Original Article

Qualities of Residency Applicants: Comparison of Otolaryngology Program Criteria With Applicant Expectations

Liana Puscas, MD; Scott R. Sharp, MD; Brian Schwab, BS; Walter T. Lee, MD

Author Affiliations: Division of Otolaryngology–Head and Neck Surgery, Duke University Medical Center (Drs Puscas, Sharp, and Lee), and Duke University School of Medicine (Mr Schwab), Durham, North Carolina.


Arch Otolaryngol Head Neck Surg. 2012;138(1):10-14. doi:10.1001/archoto.2011.214.

Objectives To evaluate the criteria used by otolaryngology programs in ranking residency candidates and to compare residency candidate ranking criteria among otolaryngology programs and applicant expectations.

Design Cross-sectional, anonymous survey administered during the 2009 and 2010 match cycles.

Setting Otolaryngology residency programs.

Participants Otolaryngology residency program applicants (PAs) and otolaryngology program directors (PDs).

Main Outcome Measures The PDs were asked to rank the importance of 10 criteria in choosing a residency candidate on a 20-point scale (with 1 indicating utmost importance; 20, not important at all). The PAs were asked to express their expectations of how candidates should be ranked using those same criteria.

Results The interview and personal knowledge of the applicant (mean rank, 3.63) were the most important criteria to PDs, whereas the interview and letters of recommendation (mean rank, 3.65) were the most important criteria among PAs. Likelihood to rank program highly and ethnicity/sex were the least valued by PDs and PAs.

Conclusions Although PDs and PAs agree on the least important criteria for ranking otolaryngology residency candidates, they disagree on the most important criteria. This information provides insight into how programs select residency candidates and how this compares with applicant expectations. Furthermore, this information will assist applicants in understanding how they might be evaluated by programs. Improved understanding of the match process may increase the likelihood of having a good fit between otolaryngology programs and matched applicants.


The match administered by the National Resident Matching Program is one of the most important events in the professional life of a physician. Because the match determines the type, length, and location of postgraduate training, and because its results are binding for both applicants and programs, the importance of the match outcome cannot be overstated. This is especially true in competitive specialties, such as otolaryngology–head and neck surgery (OHNS), in which the number of applicants exceeds the number of training spots. In the 2010 match, for example, 395 applicants vied for 280 positions in OHNS.1

Both prospective residents and residency programs have much at stake in achieving a successful outcome in which matched programs and applicants are a good fit with one another. This is reflected in the number of publications across multiple specialties that deal with resident selection criteria.2-4 This study complements a recent publication5 by our group that investigated the criteria applicants used in selecting an OHNS residency program. That study found significant differences between the criteria applicants considered most important in selecting a residency program and the criteria programs expected them to value.

Using the same respondents and methods, this study addresses the other component of the match process: selecting residency applicants. The objectives of this study were (1) to assess the criteria used by otolaryngology programs in selecting residency candidates and (2) to compare residency candidate ranking criteria among otolaryngology programs and applicant expectation of how candidates should be selected.

Exemption status was granted by the Duke University School of Medicine Institutional Review Board to conduct an anonymous survey study of otolaryngology residency program applicants (PAs) and otolaryngology program directors (PDs) during the 2009 and 2010 match cycles. During the time frame of the study, there were 105 otolaryngology residency programs in the United States.1,6 Because each residency program has its own PD, the PDs were used as proxies for the otolaryngology programs themselves. Using the mailing list maintained by the Society of University Otolaryngologists Program Directors' Association, a solicitation to complete the anonymous survey was sent to every PD. To study the PAs, a solicitation to participate along with a link to the survey was posted on the Web site OtoMatch.com, which is popular among otolaryngology applicants.

After demographic data were obtained, the survey asked respondents to rank in order of importance the following criteria in evaluating prospective residents: ethnicity/sex, extracurricular activities, interview, letters of recommendation, likelihood to rank program highly, medical school grades, personal knowledge of the applicant, reputation of applicant's medical school, research experience, and US Medical Licensing Examination (USMLE) scores. The PDs were asked to rank the importance of these qualities in evaluating prospective residents. The PAs were asked to rank each of these criteria according to how they believed OHNS programs should evaluate applicants. These 10 factors were ranked on a scale of 1 to 20, with 1 indicating most important; 5, very important; 10, important; 15, less important; and 20, not important at all. Ranks were mutually exclusive so no 2 factors could receive the same rating. We used a 1- to 20-point scale rather than a 1- to 10-point scale to allow for greater spread among the individual items. Because of this approach, respondents had greater flexibility to assign relative value to different criteria. For example, if someone thought that 2 items were very important, one could rate those factors as 1 and 2 and begin rating the others at 7. The 10 criteria were drawn from our own residency program's experience and review of the literature.
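The forced-ranking constraint described above (10 criteria, integer ranks on a 1-20 scale, no two criteria sharing a rank) can be expressed as a short validation routine. The criterion names and scale come from the survey description; the validator itself, and the example response, are illustrative and not part of the study's instrument.

```python
# Sketch of the survey's forced-ranking constraint. Only the criteria list
# and the 1-20 scale come from the survey; the validator is hypothetical.

CRITERIA = [
    "ethnicity/sex", "extracurricular activities", "interview",
    "letters of recommendation", "likelihood to rank program highly",
    "medical school grades", "personal knowledge of the applicant",
    "reputation of applicant's medical school", "research experience",
    "USMLE scores",
]

def is_valid_response(ranks: dict) -> bool:
    """True if `ranks` maps every criterion to a distinct rank in 1..20."""
    if set(ranks) != set(CRITERIA):
        return False                          # all 10 criteria must be ranked
    values = list(ranks.values())
    if any(not (1 <= v <= 20) for v in values):
        return False                          # ranks confined to the 1-20 scale
    return len(set(values)) == len(values)    # mutually exclusive ranks

# Mirrors the example in the text: two "very important" items at 1 and 2,
# with the next criterion resuming at a higher number to create spread.
example = dict(zip(CRITERIA, [19, 10, 1, 2, 18, 7, 8, 9, 11, 3]))
```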

In collaboration with the Duke Department of Biostatistics and Bioinformatics, comparisons between PD criteria and PA expectations were analyzed using the 2-sided Wilcoxon rank sum test. However, because the data were distributed in a pattern acceptable for parametric analysis, results are presented as means and standard deviations. Subanalysis of the data included the following stratifications: program size, city population, geographic location, US News & World Report ranking,7 and availability of protected research time. Subanalysis groups were compared using an unequal variance t test to determine whether there were any differentiating trends in PD attitudes toward ranking criteria based on these program characteristics. Statistical evaluation was completed using SAS statistical software, version 9.2 (SAS Institute Inc).
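For intuition, the Wilcoxon rank sum test pools the two groups' ratings, ranks them, and asks whether one group's rank sum deviates from what chance would predict. Below is a minimal pure-Python sketch using the normal approximation; the study's analysis was done in SAS 9.2, this version omits the tie correction to the variance, and the rating vectors are hypothetical, not the study's data.

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank sum test via the normal approximation.
    Ties receive midranks; the tie correction to the variance is omitted
    for brevity, so this is a sketch, not a full SAS equivalent."""
    pooled = sorted(a + b)
    midrank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1                            # run of tied values at i..j-1
        midrank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n1, n2 = len(a), len(b)
    w = sum(midrank[x] for x in a)            # rank sum of the first group
    mean_w = n1 * (n1 + n2 + 1) / 2           # E[W] under the null hypothesis
    sd_w = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mean_w) / sd_w
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical ratings: group a consistently assigns better (lower) ranks.
z, p = rank_sum_test([1, 2, 3, 4], [5, 6, 7, 8])
```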

A total of 41 PDs and 84 PAs completed the survey. One PA survey was excluded because of frivolous responses (age older than 84 years, more than 100 publications), leaving 83 for inclusion in the analyses. No duplicate entries were identified. The PA demographic data are given in Table 1, whereas residency program data are reported in Table 2. The Figure reveals the overall mean rank assigned by the PDs and PAs to individual criteria. (Criteria were ranked using a scale of 1-20, with 1 being most important and 20 being not important at all.) The interview and personal knowledge of the applicant were most important to the PDs (mean [SD] rank, 2.63 [2.72] and 3.63 [3.27], respectively), whereas the interview and letters of recommendation were most important among the PAs (mean [SD] rank, 2.55 [1.92] and 3.65 [2.34], respectively). The PDs and PAs agreed on the least important criteria: likelihood to rank program highly (PDs: mean [SD] rank, 14.28 [4.30]; PAs: mean [SD] rank, 13.48 [4.96]) and ethnicity/sex (PDs: mean [SD] rank, 17.15 [4.63]; PAs: mean [SD] rank, 16.31 [5.02]). However, given the standard deviations, these factors were not uniformly ranked as least important by all respondents.


Figure. Applicant criteria were ranked on a scale of 1 to 20, with 1 being most important and 20 being not important at all. Numbers indicate the mean rank that criterion received; error bars, standard deviation; asterisk, significant difference between the 2 groups with the associated P value (Wilcoxon rank sum test). USMLE indicates US Medical Licensing Examination.

Table 1. Demographics of the 83 Program Applicants
Table 2. Demographics of the 41 Residency Programs

For PDs, the top 3 in descending order of importance were the interview (mean [SD] rank, 2.63 [2.72]), personal knowledge of the applicant (mean [SD] rank, 3.63 [3.27]), and USMLE scores (mean [SD] rank, 4.63 [2.78]). For PAs, the top 3 in descending order of importance were the interview (mean [SD] rank, 2.55 [1.92]), letters of recommendation (mean [SD] rank, 3.65 [2.34]), and grades (mean [SD] rank, 5.06 [2.86]).

Significant differences between the 2 groups were found for 4 criteria: personal knowledge of the applicant, letters of recommendation, extracurricular activities, and reputation of the applicant's medical school. The PDs believed that personal knowledge of the applicant was the second most important criterion in assessing a future resident (mean [SD] rank, 3.63 [3.27]), whereas the PAs rated this factor as the fifth most important (mean [SD] rank, 5.65 [4.11]; P = .005). Letters of recommendation were ranked second among the PAs (mean [SD] rank, 3.65 [2.34]) and fourth among the PDs (mean [SD] rank, 5.07 [3.10]; P = .005). The PAs believed that extracurricular activities were more important (mean [SD] rank, 8.45 [4.22]) than did the PDs (mean [SD] rank, 10.00 [4.06]; P = .01). Finally, reputation of the applicant's medical school was more important to PDs (mean [SD] rank, 8.54 [3.36]) than to PAs (mean [SD] rank, 10.96 [4.83]; P = .009). No significant differences were found between PDs and PAs in the relative importance of the interview, USMLE scores, medical school grades, research experience, likelihood to rank program highly, or ethnicity/sex.

Subgroup analyses were performed to determine whether there were any differentiating trends in programs' attitudes toward applicants based on various program characteristics. Programs were compared by size: those with 1 to 3 residents per year vs those with 4 to 5 residents per year. Smaller programs rated medical school grades as significantly more important (mean rank, 4.81) than did larger programs (mean rank, 7.80; P = .03). Programs in the top 20 hospitals rated reputation of the applicant's medical school as significantly more important (mean rank, 6.00) than did programs not in the top 20 (mean rank, 9.25; P = .009). When we examined subgroups based on geographic area, programs differed in several comparisons. Programs in the West rated extracurricular activities as significantly more important (mean rank, 3.50) than did programs in the Midwest (mean rank, 11.07; P = .03) or the East (mean rank, 9.96; P = .02). Programs in the West also rated medical school grades (mean rank, 3.50) as significantly more important than did programs in the East (mean rank, 6.04; P = .02). No differences were found when programs were compared according to whether they had a dedicated research rotation.

The ultimate aim of the process of reviewing, interviewing, and ranking applicants is to match residents who will perform well in the respective OHNS training programs. The great challenge is to identify those traits in applicants that predict good outcomes—not only while in training but also after residents leave the supervision and structure of the training program. Several articles8,9 on this topic have been written pertaining to OHNS. Unfortunately, such prediction is difficult at this time given the conflicting information available on common performance measures, such as medical school grades and USMLE scores.10-13 In the absence of widely accepted and validated predictors of success, OHNS residency programs must use the criteria available to them to identify applicants best suited for their training programs.

The goal of this study was to survey residency programs to identify which factors they considered important in evaluating potential residents. A further, novel aim was to determine whether these priorities differed from how applicants expected programs to assess prospective residents. This information provides insight into how programs rank residency candidates and how this compares with applicant expectations. Furthermore, these data will assist applicants in understanding how they may be evaluated by programs and help direct their application efforts.

The interview was considered most important by both the PAs and the PDs. The interview allows PDs and PAs to get a “feel” for each other and to easily disseminate information and ask and answer questions. It also gives each party the opportunity to decide whether one can work with the other for the duration of the training period. Numerous other studies14-19 have also shown the prime importance given to the interview in determining the rank list. McCaffrey20 found that medical students believe the interview is the most important criterion by which programs rank applicants, and our study showed that PAs believe it should be the most important factor used to assess an applicant. However, it is still undetermined whether the interview is able to predict subsequent performance during residency.21,22

Interestingly, personal knowledge of the applicant was rated second by the PDs while the PAs ranked this criterion as fifth most important. Our survey did not specifically ask how this personal knowledge was obtained, but in the survey under this criterion, “rotations or research” was given as an example. The PAs ranked letters of recommendation as second while the PDs ranked this criterion as fourth most important. Perhaps PAs believed that because the letters are written by people who knew them well, the letters could be trusted to be an accurate representation of the applicant.

Given the utmost importance placed on personal interaction (through either the interview or personal knowledge of the applicant) by PDs, PAs interested in a particular residency program may want to consider doing a rotation at that site. For applicants doing an OHNS rotation either at home or away, our survey confirms that the rotation is an audition, and an applicant's performance will weigh heavily in the ranking process. The same has been found in another competitive specialty, orthopedic surgery, in which away rotations increased the chances of matching.23 Because PAs believed that letters of recommendation were second only to the interview in importance, PDs may want to consider contacting the authors of those letters to gain more information about the applicant.

The reputation of a medical school is admittedly a highly subjective assessment. Although some may use the amount of research dollars in the budget or the level of National Institutes of Health funding as a guide, these are not reflective of any individual medical student's performance or potential. Perhaps this is why PAs ranked this lower than PDs, leading to a significant difference in the ranking of this item. One can also consider an applicant's class rank, but this information may not be known by every applicant. In addition, the use of an individual's class rank is fraught with difficulty because there is no way to verify this information until after the applicant graduates, and by this time the match is already completed.

Our study has several limitations. The response rate from programs was 39%, but it is not possible to assess the response rate from applicants because there is no way to know how many OHNS applicants saw or followed the link to the survey on OtoMatch.com. This is an inherent limitation to any study designed to solicit anonymous survey responses, so to help increase survey response, 2 match cycles were studied. Although a larger response pool would further validate these findings, the data obtained from this cross-sectional study provide insight into the match process of selecting applicants.

Because PDs were used as a proxy to evaluate the attitudes of residency programs, it is possible that the beliefs of the PDs do not accurately reflect the beliefs of the faculty within that program. There is also an issue of multiple comparisons because there are 10 different criteria being assessed, and there is an issue of dependence among the factors because the ranks were mutually exclusive. Some respondents may have believed that 2 factors were equally important but were forced by the survey design to assign 1 rank per factor. To minimize this issue, we used a scale of 1 to 20 rather than a scale of 1 to 10 so that those completing the survey would have greater flexibility to discriminate among ranks.

Currently, there is no validated method for identifying and assessing the ideal candidate for a residency program. Personality testing, task-based questions related to the responsibilities of a resident, evaluation under simulated stressful conditions, and character assessment are all complementary facets of applicant review, and some have been formally investigated in selecting residents.8,24-26 Ideally, a method can be developed that incorporates all of these elements to allow an accurate judgment of a potential resident's performance in a given program.

In conclusion, this study provides insight on the current criteria PDs use to rank PAs and how those criteria compare with the expectations of PAs of how they should be evaluated. The top 5 criteria for both groups were medical school grades, interview, letters of recommendation, personal knowledge of the applicant, and USMLE scores. However, PDs and PAs significantly differed in the order of these criteria.

Correspondence: Walter T. Lee, MD, Division of Otolaryngology–Head and Neck Surgery, Duke University Medical Center (DUMC 3805), Durham, NC 27710 (walter.lee@duke.edu).

Submitted for Publication: July 21, 2011; accepted September 27, 2011.

Author Contributions: All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Sharp and Lee. Acquisition of data: Puscas, Sharp, and Lee. Analysis and interpretation of data: Puscas, Sharp, Schwab, and Lee. Drafting of the manuscript: Puscas and Lee. Critical revision of the manuscript for important intellectual content: Sharp, Schwab, and Lee. Statistical analysis: Sharp, Schwab, and Lee. Administrative, technical, and material support: Schwab. Study supervision: Lee.

Financial Disclosure: None reported.

Additional Information: Drs Puscas and Sharp contributed equally to this work.

Additional Contributions: Maragatha Kuchibhatla, PhD, Department of Biostatistics and Bioinformatics, Duke University, performed data analysis. Ramon M. Esclamado, MD, provided intellectual contributions to this project.

References

1. National Resident Matching Program. Results and Data: 2010 Main Residency Match. Washington, DC: National Resident Matching Program; April 2010:3.
2. Kelz RR, Mullen JL, Kaiser LR, et al. Prevention of surgical resident attrition by a novel selection strategy. Ann Surg. 2010;252(3):537-543.
3. Lee AG, Golnik KC, Oetting TA, et al. Re-engineering the resident applicant selection process in ophthalmology: a literature review and recommendations for improvement. Surv Ophthalmol. 2008;53(2):164-176.
4. Muffly TM, Penick ER, Tang F, et al. Factors used by female pelvic medicine and reconstructive surgery fellowship directors to select their fellows. Int Urogynecol J. 2010;21(3):349-352.
5. Sharp S, Puscas L, Schwab B, Lee WT. Comparison of applicant criteria and program expectations for choosing residency programs in the otolaryngology match. Otolaryngol Head Neck Surg. 2011;144(2):174-179.
6. National Resident Matching Program. Results and Data: 2009 Main Residency Match. Washington, DC: National Resident Matching Program; April 2009:3.
7. America's Best Hospitals 2008 Methodology. RTI International; 2008. http://www.rti.org/pubs/abhmethod_2008.pdf. Updated July 18, 2008. Accessed July 19, 2011.
8. Prager JD, Myer CM IV, Hayes KM, Myer CM III, Pensak ML. Improving methods of resident selection. Laryngoscope. 2010;120(12):2391-2398.
9. Carlson ML, Archibald DJ, Sorom AJ, Moore EJ. Under the microscope: assessing surgical aptitude of otolaryngology residency applicants. Laryngoscope. 2010;120(6):1109-1113.
10. Dirschl DR, Dahners LE, Adams GL, Crouch JH, Wilson FC. Correlating selection criteria with subsequent performance as residents. Clin Orthop Relat Res. 2002;(399):265-271.
11. Takayama H, Grinsell R, Brock D, Foy H, Pellegrini C, Horvath K. Is it appropriate to use core clerkship grades in the selection of residents? Curr Surg. 2006;63(6):391-396.
12. Wood PS, Smith WL, Altamaier EM, Tarico VS, Franken EA Jr. A prospective study of cognitive and noncognitive selection criteria as predictors of resident performance. Invest Radiol. 1990;25(7):761-762.
13. Yindra KJ, Rosenfeld PS, Donnelly MB. Medical school achievements as predictors of residency performance. J Med Educ. 1988;63(5):356-363.
14. Adams LJ, Brandenburg S, Blake M. Factors influencing internal medicine program directors' decisions about applicants. Acad Med. 2000;75(5):542-543.
15. Swanson WS, Harris MC, Master C, Gallagher PR, Mauro AE, Ludwig S. The impact of the interview in pediatric residency selection. Ambul Pediatr. 2005;5(4):216-220.
16. Galazka SS, Kikano GE, Zyzanski S. Methods of recruiting and selecting residents for U.S. family practice residencies. Acad Med. 1994;69(4):304-306.
17. DeLisa JA, Jain SS, Campagnolo DI. Factors used by physical medicine and rehabilitation residency training directors to select their residents. Am J Phys Med Rehabil. 1994;73(3):152-156.
18. Taylor CA, Weinstein L, Mayhew HE. The process of resident selection: a view from the residency director's desk. Obstet Gynecol. 1995;85(2):299-303.
19. Meyer DR, Dewan MA. Fellowship selection criteria in ophthalmic plastic and reconstructive surgery. Ophthal Plast Reconstr Surg. 2010;26(5):357-359.
20. McCaffrey JC. Medical student selection of otolaryngology–head and neck surgery as a specialty: influences and attitudes. Otolaryngol Head Neck Surg. 2005;133(6):825-830.
21. Kandler H, Plutchik R, Conte H, Siegel B. Prediction of performance of psychiatric residents: a three-year follow-up study. Am J Psychiatry. 1975;132(12):1286-1290.
22. Brothers TE, Wetherholt S. Importance of the faculty interview during the resident application process. J Surg Educ. 2007;64(6):378-385.
23. Baldwin K, Weidner Z, Ahn J, Mehta S. Are away rotations critical for a successful match in orthopaedic surgery? Clin Orthop Relat Res. 2009;467(12):3340-3345.
24. Zardouz S, German MA, Wu EC, Djalilian HR. Personality types of otolaryngology resident applicants as described by the Myers-Briggs Type Indicator. Otolaryngol Head Neck Surg. 2011;144(5):714-718.
25. Swanson JA, Antonoff MB, D’Cunha J, Maddaus MA. Personality profiling of the modern surgical trainee: insights into Generation X. J Surg Educ. 2010;67(6):417-420.
26. Merlo LJ, Matveevskii AS. Personality testing may improve resident selection in anesthesiology programs. Med Teach. 2009;31(12):e551-e554.


