Digital Commons @ University of South Florida

Educational Measurement and Research Theses and Dissertations

Theses/Dissertations from 2023

Exploring Time-Varying Extraneous Variables Effects in Single-Case Studies, Ke Cheng

Identifying Contributors to Disproportionality: The Influence of Perception on Student Social, Emotional, and Academic Behavior Ratings, Chelsea Salvatore

Transformation Zone Schools and School Change Processes: Experiences of Families, Jesse Strong

Using Social Network Analysis to Measure and Visualize Student Clustering Within Middle and High Schools, Geoffrey David West

Theses/Dissertations from 2022

Psychometric Characteristics of Academic Language Discourse Analysis Tools, Courtney (Cici) Brianna Claar

The Impact of Teachers’ Treatment Fidelity to the Good Behavior Game on Problem Behaviors Exhibited within a Self-Contained Classroom Setting, Jennifer M. Hodnett

Distributed Leadership: Formal Leadership, Barriers, and Facilitators for Multi-Tiered Systems of Support, Joseph D. Latimer

Development and Initial Validation of the Parent and Family Engagement in Higher Education Measure, Michelle R. McNulty

Theses/Dissertations from 2021

Retaining and Supporting Graduate Racially Minoritized Students: A Critical Analysis, Patricia Y. Gills

Educators’ Sensemaking of Data within an MTSS Framework: An Exploratory Case Study, Stephanie Marie Green

Native and Non-native English-speaking Doctoral Students' Strategies to Understand Idiomatics in Comics and Comic Strips, Himelhoch Luz-Aydé

Relationship Between Working with Professional Evaluators and an Organization’s Evaluation Culture, James M. Wharton

Impacts of Model Misspecification on Individual-Level Differential Item Functioning Detection in Multilevel Data Using Hierarchical Generalized Linear Model, Yue Yin

Theses/Dissertations from 2020

Predictive Validity of Standards-based and Curriculum-embedded Assessments for Predicting Readiness at Kindergarten Entry, Elizabeth Ashton DeCamilla

Examining Role Breadth, Efficacy, and Attitudes Toward Trauma-Informed Care in Elementary School Educators, Mikayla J. Drymond

Performance Based Funding and the Florida State University System: An Exploratory Analysis, Laura A. Hoffman

Distributed Leadership: Leadership Teams and Implementing Multi-Tiered Systems of Support, Joseph D. Latimer

Adolescent Asthma and School Disparities: An Ecological Perspective of Students and Stakeholders, Tali Schneider

Intensive Longitudinal Data Analyses and Sample Size Considerations in Intervention Studies with Dynamic Structural Equation Modeling, Zhiyao Yi

Theses/Dissertations from 2019

Exploring the Behavior of Model Fit Criteria in the Bayesian Approximate Measurement Invariance: A Simulation Study, Abeer Atallah S. Alamri

Statistical Models to Test Measurement Invariance with Paired and Partially Nested Data: A Monte Carlo Study, Diep Thi Nguyen

Theses/Dissertations from 2018

Case Study of a Collaborative Approach to Evaluation Within a School District Central Office, Oriana Eversole

Application of the Fusion Model for Cognitive Diagnostic Assessment with Non-diagnostic Algebra-Geometry Readiness Test Data, Robert H. Fay

Pre-Service Teacher Micro-Hegemonic Construction of Literacy Teacher Identity, Brian M. Flores

Assessing Readiness to Seek Formal Mental Health Services: Development and Initial Validation of the Mental Health Belief Model Assessment (MHBMA), Jennifer A. Greene

Missing Data in Complex Sample Surveys: Impact of Deletion and Imputation Treatments on Point and Interval Parameter Estimates, Anh Pham Kellermann

Adult College Students' Perceptions about Learning Mathematics via Developmental Mathematical xMOOCs, Pelagia Alesafis Kilgore

Adolescents’ Perceptions of Parenting Practices and their Influence on Success, Academic Motivation, and School Belonging, David Rubio Jr.

Covariates in Factor Mixture Modeling: Investigating Measurement Invariance across Unobserved Groups, Yan Wang

Predictive Validity of Florida’s Postsecondary Education Readiness Test, Alisa Murphy Žujović

Theses/Dissertations from 2017

Faculty Perceptions of the Quality Enhancement Plan in a US Public Doctoral University with Highest Research Activity: A Case Study, Maha Alamoud

Exploring the Test of Covariate Moderation Effect and the Impact of Model Misspecification in Multilevel MIMIC Models, Chunhua Cao

The Empirical Selection of Anchor Items Using a Multistage Approach, Brandon Craig

Robustness of the Within- and Between-Series Estimators to Non-Normal Multiple-Baseline Studies: A Monte Carlo Study, Seang-Hwane Joo

Extending the Model with Internal Restrictions on Item Difficulty (MIRID) to Study Differential Item Functioning, Yong "Isaac" Li

The Performance of Multilevel Structural Equation Modeling (MSEM) in Comparison to Multilevel Modeling (MLM) in Multilevel Mediation Analysis with Non-normal Data, Thanh Vinh Pham

Theses/Dissertations from 2016

Examinees' Perceptions of the Physical Aspects of the Testing Environment During the National Physical Therapy Examination, Ellen Kroog Donald

A Comparison of Educational "Value-Added" Methodologies for Classifying Teacher Effectiveness: Value Tables vs. Covariate Regression, Theodore J. Dwyer

Theses/Dissertations from 2015

An Empirical Comparison of the Effect of Missing Data on Type I Error and Statistical Power of the Likelihood Ratio Test for Differential Item Functioning: An Item Response Theory Approach using the Graded Response Model, Patricia Rodriguez De Gil

Theses/Dissertations from 2014

Documenting Elementary Teachers' Sustainability of Instructional Practices: A Mixed Method Case Study, Bridget Cotner

Validation of the Scores of the Instructional Pedagogical and Instructional Student Engagement Components of Fidelity of Implementation, Sandra F. Naoom

A Monte Carlo Study: The Consequences of the Misspecification of the Level-1 Error Structure, Merlande Petit-Bois

Theses/Dissertations from 2013

Effectiveness of Propensity Score Methods in a Multilevel Framework: A Monte Carlo Study, Aarti P. Bellara

The Relationship between Rating Scales used to Evaluate Tasks from Task Inventories for Licensure and Certification Examinations, Adrienne W. Cadle

An Analysis of Factor Extraction Strategies: A Comparison of the Relative Strengths of Principal Axis, Ordinary Least Squares, and Maximum Likelihood in Research Contexts that Include both Categorical and Continuous Variables, Kevin Barry Coughlin

Value-Added and Observational Measures Used in the Teacher Evaluation Process: A Validation Study, Claudia Güerere

The Performance of the Linear Logistic Test Model When the Q-Matrix is Misspecified: A Simulation Study, George T. Macdonald

Translation, Adaptation and Invariance Testing of the Teaching Perspectives Inventory: Comparing Faculty of Malaysia and the United States, Jecky Misieng

Theses/Dissertations from 2012

Sensitivity of Value Added School Effect Estimates to Different Model Specifications and Outcome Measures, Bryce L. Pride

Theses/Dissertations from 2011

An Examination of the Relationship between Elementary Education Teacher Candidates' Authentic Assessments and Performance on the Professional Education Subtests on the Florida Teacher Certification Exam (FTCE), Thomas Raymond Lang II

Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model, Corina M. Owens

Theses/Dissertations from 2010

Exploring Evaluation in School Districts: School District Evaluators and Their Practice, Susan Hibbard

Theses/Dissertations from 2009

Multiple Approaches to the Validation of the Scores from the Study Anxiety Inventory, George Douglas Lunsford

Shyness in the Context of Reduced Fear of Negative Evaluation and Self-Focus: A Mixed Methods Case Study, Freda S. Watson

Theses/Dissertations from 2008

Schools as Moderators of Neighborhood Influences on Adolescent Academic Achievement and Risk of Obesity: A Cross-Classified Multilevel Investigation, Bethany A. Bell-Ellison

A Monte Carlo Approach for Exploring the Generalizability of Performance Standards, James Thomas Coraggio

The Robustness of Rasch True Score Preequating to Violations of Model Assumptions Under Equivalent and Nonequivalent Populations, Garron Gianopulos

Correlates of Mathematics Achievement in Developed and Developing Countries: An HLM Analysis of TIMSS 2003 Eighth-Grade Mathematics Scores, Ha T. Phan

A Mixed Methods Approach to Examining Factors Related to Time to Attainment of the Doctorate in Education, Hesborn Otieno Wao

Theses/Dissertations from 2007

The Tip of the Blade: Self-Injury Among Early Adolescents, Moya L. Alfonso

Examining the Issues Surrounding Violating the Assumption of Independent Observations in Reliability Generalization Studies: A Simulation Study, Jeanine L. Romano

Theses/Dissertations from 2006

A Comparability Analysis of the National Nurse Aide Assessment Program, Peggy K. Jones

Detecting Publication Bias in Random Effects Meta-Analysis: An Empirical Comparison of Statistical Methods, Gianna Rendina-Gobioff

Theses/Dissertations from 2005

Investigating the Hispanic/Latino Male Dropout Phenomenon: Using Logistic Regression and Survival Analysis, Dorian Charles Vizcain

Theses/Dissertations from 2004

Teachers’ Mathematics Preparation and Eighth Grade Student Mathematics Achievement: Can an Integrated Learning System Provide Support When Teachers’ Professional Preparation is Limited?, Christine Kerstyn

The Construct Validation of an Instrument Based on Students’ University Choice and their Perceptions of Professor Effectiveness and Academic Reputation at the University of Los Andes, Josefa Maria Montilla

The Stress Response, Psychoeducational Interventions and Assisted Reproduction Technology Treatment Outcomes: A Meta-Analytic Review, Karen Rose Mumford

Teacher Satisfaction in Public, Private, and Charter Schools: A Multi-Level Analysis, Christina Sentovich

Theses/Dissertations from 2003

Estimates of Statistical Power and Accuracy for Latent Trajectory Class Enumeration in the Growth Mixture Model, Eric C. Brown

Effect Sizes, Significance Tests, and Confidence Intervals: Assessing the Influence and Impact of Research Reporting Protocol and Practice, Melinda Rae Hess

Risky Predictions, Damn Strange Coincidences, and Theory Appraisal: A Multivariate Corroboration Index for Path Analytic Models, Kristine Y. Hogarty

Perception of Leadership Qualities in Higher Education: Impact of Professor Gender, Professor Leader Style, Situation, and Participant Gender, Michela A. LaRocca


BYU ScholarsArchive

Educational Inquiry, Measurement, and Evaluation Theses and Dissertations

Theses/Dissertations from 2023

Evaluating the Impact of Math Self-Efficacy, Math Self-Concept, and Gender on STEM Enrollment and Retention in Postsecondary Education, Marcia Bingham

Multilevel Mixture IRT Modeling for the Analysis of Differential Item Functioning, Luke Dras

An Analysis of the Factor Structure and Measurement Invariance of the Performance Assessment and Evaluation System Ratings of Preservice Teachers, Anna Kay Steadman

Theses/Dissertations from 2022

Math Achievement Opportunity for American Mexican Children in Mexico: A Structural Equation Modeling Analysis Using Multilevel Data, Jimmy E. Hernandez

A Comparison of Methods of Rating Creative Writing: A Many-Facets Rasch and User Experience Analysis, Alicia McIntire

High School Dropouts, Higher Education Dreams, and Achievement: A Six-Year Study of a High-Stakes Test in Brazil, Eveline Miranda

Theses/Dissertations from 2021

The Development of a Social and Emotional Well-Being Scale Using ESEM and CFA: Synergistic Stories in Complex Models, Christopher Hughes Busath

The Practice of Belonging: Can Learning Entrepreneurship Accelerate and Aid the Social Inclusion of Refugees in the United States, Jabra F. Ghneim

Longitudinal Measurement Invariance of the Outcome Questionnaire-45, Shiloh Marie Howland

A Psychometric Analysis of the Precalculus Concept Assessment, Brian Lindley Jones

A Self-Study of the Shifts in Teacher Educator Knowledge Resulting From the Move From In-Person to Online Instruction, Celina Dulude Lay

What Did You Say? Investigating the Relationship of Self-Perceived Communication Competence and Mindfulness in Communication on Levels of Organizational Trust in a Postsecondary Academic Library, Rebecca Jo Peterson

Professional Development and Change in Teachers' Beliefs and Practice for Teaching English Language Learners, Kerong Wu

Theses/Dissertations from 2020

Factor Structure of the Jordan Performance Appraisal System: A Multilevel Multigroup Study Using Categorical and Count Data, Holly Lee Allen

Special Education Teacher Burnout: A Factor Analysis, Heidi Celeste Bussey

Evaluating Model Fit for Longitudinal Measurement Invariance with Ordered Categorical Indicators, Jonathan Caleb Clark

Factors Influencing Teacher Survival in the Beginning Teacher Longitudinal Study, Lisa McLachlan

Designing Software to Unify Person-Fit Assessment, Phillip Isaac Pfleger

Theses/Dissertations from 2019

Testing a Scale of Teacher Beliefs About Universal Curriculum Integration in the 21st Century (UCI21-T), Nicole E. Anderson

A Validity Study of the Cognitively Guided Instruction Teacher Knowledge Assessment, Debra Smith Fuentes

Recommendations Regarding Q-Matrix Design and Missing Data Treatment in the Main Effect Log-Linear Cognitive Diagnosis Model, Rui Ma

Understanding STEM Faculty Members' Decisions About Evidence-Based Instructional Practices, Rebecca Louise Sansom

Theses/Dissertations from 2018

Estimating the Reliability of Scores from a Social Network Survey Questionnaire in Light of Actor, Alter, and Dyad Clustering Effects, Timothy Dean Walker

Theses/Dissertations from 2017

Alternative Methods of Estimating the Degree of Uncertainty in Student Ratings of Teaching, Ala'a Mohammad Alsarhan

Identity, Power, and Conflict in Preschool Teaching Teams, Esther Marshall

A Case Study on the Facilitation of Positive Behavior Interventions and Supports in a Public Elementary School, John T. Shumway

An Examination of the Psychometric Properties of the Trauma Inventory for Partners of Sex Addicts (TIPSA), Steven Scott Stokes

An Examination of the Factor Structure of the Trauma Inventory for Partners of Sex Addicts (TIPSA), Heidi A. Vogeler

Theses/Dissertations from 2016

The Impact of Shortening a Long Survey on Response Rate and Response Quality, Daniel Stephen Allen

Student Growth Trajectories with Summer Achievement Loss Using Hierarchical and Growth Modeling, Sara Bernice Chapman

A Multiple-Cutoff Regression-Discontinuity Analysis of the Effects of Tier 2 Reading Interventions in a Title I Elementary School, Eli A. Jones

An Examination of the Psychometric Properties of the Student Risk Screening Scale for Internalizing and Externalizing Behaviors: An Item Response Theory Approach, Sara E. Moulton

Evidence for the Validity of the Student Risk Screening Scale in Middle School: A Multilevel Confirmatory Factor Analysis, Matthew Porter Wilcox

Theses/Dissertations from 2015

Does an iPad Change the Experience? A Look at Mother-Child Book Reading Interactions, Kathryn L. MacKay

Effect of Gender, Guilt, and Shame on BYU Business School Students' Innovation: Structural Equation Modeling Approach, Rasha Mohsen Qudisat

Supporting an Understanding of Mathematics Teacher Educators: Identifying Shared Beliefs and Ways of Enacting Their Craft, Joseph S. Rino

The Effects of Open Educational Resource Adoption on Measures of Post-Secondary Student Success, Thomas J. Robinson

Using Social Validity to Examine Teacher Perspectives of Positive Behavior Intervention Support Programs: A Quasi-Replication Study, Jason Leonard Wright

Theses/Dissertations from 2014

A Desk Study of the Education Policy Implications of Using Data from Multiple Sources: Example of Primary School Teacher Supply and Demand in Malawi, Moses Khombe

Theses/Dissertations from 2013

A Model of Digital Textbook Quality from the Perspective of College Students, TJ Bliss

The Development and Validation of a Spanish Elicited Imitation Test of Oral Language Proficiency for the Missionary Training Center, Carrie A. Thompson

Theses/Dissertations from 2012

Propensity Score Methods as Alternatives to Value-Added Modeling for the Estimation of Teacher Contributions to Student Achievement, Kimberlee Kaye Davison

Evaluation of the Effectiveness of the Students and Teachers Achieving Reading Success Program for First Graders, Whitney Ann Phillips

Communication Patterns Among Members of Engineering Global Virtual Teams, Holt Zaugg

Theses/Dissertations from 2011

Preservice Special Education Teachers' Beliefs About Effective Reading Instruction for Students with Mild/Moderate Disabilities, Nari Carter

Theses/Dissertations from 1979

The Development and Evaluation of a Children's Gospel Principles Course, Lynn R. Applegate

Theses/Dissertations from 1963

The Effectiveness of Home Night as a Supplement to LDS Seminary Instruction, Joseph L. Allen

Theses/Dissertations from 1960

A Study of the Effectiveness of Bibliotherapy to Effect a Change of Attitudes as Measured Statistically, June Gracey Whiteford


The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation

  • Edited by: Bruce B. Frey
  • Publisher: SAGE Publications, Inc.
  • Publication year: 2018
  • Online pub date: June 05, 2018
  • Discipline: Education
  • Methods: Educational research , Measurement
  • DOI: https://doi.org/10.4135/9781506326139
  • Keywords: educational research, encyclopedia, evaluation, measurement
  • Print ISBN: 9781506326153
  • Online ISBN: 9781506326139


In an era of curricular changes, experiments, and high-stakes testing, educational measurement and evaluation is more important than ever. In addition to expected entries covering the basics of traditional theories and methods, other entries discuss important sociopolitical issues and trends influencing the future of that research and practice. Textbooks, handbooks, monographs, and other publications focus on various aspects of educational research, measurement, and evaluation, but to date there exists no major reference guide for students new to the field. This comprehensive work fills that gap, covering traditional areas while pointing the way to future developments.

Features:

  • Nearly 700 signed entries are contained in an authoritative work spanning four volumes, available in a choice of electronic and/or print formats.
  • Although organized A-to-Z, the front matter includes a Reader’s Guide that groups entries thematically to help students interested in a specific aspect of education research, measurement, and evaluation more easily locate directly related entries. (Sample themes include Data, Evaluation, Measurement Concepts & Issues, Research, Sociopolitical Issues, and Standards.)
  • Back matter includes a Chronology of the development of the field; a Resource Guide to classic books, journals, and associations; and a detailed Index.
  • Entries conclude with References/Further Readings and Cross-References to related entries.
  • The Index, Reader’s Guide themes, and Cross-References combine to provide robust search-and-browse in the e-version.

Front Matter

  • Editorial Board
  • List of Entries
  • Reader’s Guide
  • About the Editor
  • Contributors
  • Introduction

Back Matter

  • Appendix A: Resource Guide
  • Appendix B: Chronology

Reader’s Guide: List of Entries
  • Standards for Educational and Psychological Testing
  • Accessibility of Assessment
  • Accommodations
  • African Americans and Testing
  • Asian Americans and Testing
  • Ethical Issues in Testing
  • Gender and Testing
  • High-Stakes Tests
  • Latinos and Testing
  • Minority Issues in Testing
  • Second Language Learners, Assessment of
  • Test Security
  • Testwiseness
  • Ability Tests
  • Achievement Tests
  • Adaptive Behavior Assessments
  • Admissions Tests
  • Alternate Assessments
  • Aptitude Tests
  • Attenuation, Correction for
  • Attitude Scaling
  • Basal Level and Ceiling Level
  • Buros Mental Measurements Yearbook
  • Classification
  • Cognitive Diagnosis
  • Computer-Based Testing
  • Computerized Adaptive Testing
  • Confidence Interval
  • Curriculum-Based Assessment
  • Diagnostic Tests
  • Difficulty Index
  • Discrimination Index
  • English Language Proficiency Assessment
  • Formative Assessment
  • Intelligence Tests
  • Interquartile Range
  • Minimum Competency Testing
  • Personality Assessment
  • Power Tests
  • Progress Monitoring
  • Projective Tests
  • Psychometrics
  • Reading Comprehension Assessments
  • Screening Tests
  • Self-Report Inventories
  • Sociometric Assessment
  • Speeded Tests
  • Standards-Based Assessment
  • Summative Assessment
  • Technology-Enhanced Items
  • Test Battery
  • Testing, History of
  • Value-Added Models
  • Written Language Assessment
  • Authentic Assessment
  • Backward Design
  • Bloom’s Taxonomy
  • Classroom Assessment
  • Constructed-Response Items
  • Curriculum-Based Measurement
  • Essay Items
  • Fill-in-the-Blank Items
  • Game-Based Assessment
  • Matching Items
  • Multiple-Choice Items
  • Paper-and-Pencil Assessment
  • Performance-Based Assessment
  • Portfolio Assessment
  • Selection Items
  • Student Self-Assessment
  • Supply Items
  • Technology in Classroom Assessment
  • True-False Items
  • Universal Design of Assessment
  • a Parameter
  • b Parameter
  • c Parameter
  • Conditional Standard Error of Measurement
  • Differential Item Functioning
  • Item Information Function
  • Item Response Theory
  • Multidimensional Item Response Theory
  • Rasch Model
  • Test Information Function
  • Testlet Response Theory
  • Coefficient Alpha
  • Decision Consistency
  • Inter-Rater Reliability
  • Internal Consistency
  • Kappa Coefficient of Agreement
  • Phi Coefficient (in Generalizability Theory)
  • Reliability
  • Spearman-Brown Prophecy Formula
  • Split-Half Reliability
  • Test–Retest Reliability
  • Age Equivalent Scores
  • Analytic Scoring
  • Automated Essay Evaluation
  • Criterion-Referenced Interpretation
  • Grade-Equivalent Scores
  • Guttman Scaling
  • Holistic Scoring
  • Intelligence Quotient
  • Interval-Level Measurement
  • Ipsative Scales
  • Levels of Measurement
  • Likert Scaling
  • Multidimensional Scaling
  • Nominal-Level Measurement
  • Norm-Referenced Interpretation
  • Normal Curve Equivalent Score
  • Ordinal-Level Measurement
  • Percentile Rank
  • Primary Trait Scoring
  • Propensity Scores
  • Rating Scales
  • Reverse Scoring
  • Score Reporting
  • Semantic Differential Scaling
  • Standardized Scores
  • Thurstone Scaling
  • Visual Analog Scales
  • W Difference Scores
  • Bayley Scales of Infant and Toddler Development
  • Beck Depression Inventory
  • Dynamic Indicators of Basic Early Literacy Skills
  • Educational Testing Service
  • Iowa Test of Basic Skills
  • Kaufman-ABC Intelligence Test
  • Minnesota Multiphasic Personality Inventory
  • National Assessment of Educational Progress
  • Partnership for Assessment of Readiness for College and Careers
  • Peabody Picture Vocabulary Test
  • Programme for International Student Assessment
  • Progress in International Reading Literacy Study
  • Raven’s Progressive Matrices
  • Smarter Balanced Assessment Consortium
  • Standardized Tests
  • Stanford-Binet Intelligence Scales
  • Torrance Tests of Creative Thinking
  • Trends in International Mathematics and Science Study
  • Wechsler Intelligence Scales
  • Woodcock-Johnson Tests of Achievement
  • Woodcock-Johnson Tests of Cognitive Ability
  • Woodcock-Johnson Tests of Oral Language
  • Concurrent Validity
  • Consequential Validity Evidence
  • Construct Irrelevance
  • Construct Underrepresentation
  • Content-Related Validity Evidence
  • Criterion-Based Validity Evidence
  • Measurement Invariance
  • Multicultural Validity
  • Multitrait–Multimethod Matrix
  • Predictive Validity
  • Sensitivity
  • Social Desirability
  • Specificity
  • Unitary View of Validity
  • Validity Coefficients
  • Validity Generalization
  • Validity, History of
  • Critical Thinking
  • Learned Helplessness
  • Locus of Control
  • Long-Term Memory
  • Metacognition
  • Problem Solving
  • Self-Efficacy
  • Self-Regulation
  • Short-Term Memory
  • Working Memory
  • Data Visualization Methods
  • Graphical Modeling
  • Scatterplots
  • Asperger’s Syndrome
  • Attention-Deficit/Hyperactivity Disorder
  • Autism Spectrum Disorder
  • Bipolar Disorder
  • Developmental Disabilities
  • Intellectual Disability and Postsecondary Education
  • Learning Disabilities
  • F Distribution
  • Areas Under the Normal Curve
  • Bernoulli Distribution
  • Distributions
  • Moments of a Distribution
  • Normal Distribution
  • Poisson Distribution
  • Posterior Distribution
  • Prior Distribution
  • Brown v. Board of Education
  • Adequate Yearly Progress
  • Americans with Disabilities Act
  • Coleman Report
  • Common Core State Standards
  • Corporal Punishment
  • Every Student Succeeds Act
  • Family Educational Rights and Privacy Act
  • Great Society Programs
  • Health Insurance Portability and Accountability Act
  • Individualized Education Program
  • Individuals With Disabilities Education Act
  • Least Restrictive Environment
  • No Child Left Behind Act
  • Policy Research
  • Race to the Top
  • School Vouchers
  • Special Education Identification
  • Special Education Law
  • State Standards
  • Advocacy in Evaluation
  • Collaboration, Evaluation of
  • Conceptual Framework
  • Evaluation Versus Research
  • Evaluation, History of
  • Feasibility
  • Goals and Objectives
  • Logic Models
  • Program Theory of Change
  • Stakeholders
  • Triangulation
  • Appreciative Inquiry
  • CIPP Evaluation Model
  • Collaborative Evaluation
  • Consumer-Oriented Evaluation Approach
  • Cost–Benefit Analysis
  • Culturally Responsive Evaluation
  • Democratic Evaluation
  • Developmental Evaluation
  • Empowerment Evaluation
  • Evaluation Capacity Building
  • Evidence-Centered Design
  • External Evaluation
  • Feminist Evaluation
  • Formative Evaluation
  • Four-Level Evaluation Model
  • Goal-Free Evaluation
  • Internal Evaluation
  • Needs Assessment
  • Participatory Evaluation
  • Personnel Evaluation
  • Policy Evaluation
  • Process Evaluation
  • Program Evaluation
  • Responsive Evaluation
  • Success Case Method
  • Summative Evaluation
  • Utilization-Focused Evaluation
  • Adolescence
  • Cognitive Development, Theory of
  • Erikson’s Stages of Psychosocial Development
  • Kohlberg’s Stages of Moral Development
  • Parenting Styles
  • Accreditation
  • Angoff Method
  • Body of Work Method
  • Bookmark Method
  • Construct-Related Validity Evidence
  • Content Analysis
  • Content Standard
  • Content Validity Ratio
  • Curriculum Mapping
  • Ebel Method
  • Instructional Sensitivity
  • Item Analysis
  • Item Banking
  • Item Development
  • Learning Maps
  • Modified Angoff Method
  • Proficiency Levels in Language
  • Readability
  • Score Linking
  • Standard Setting
  • Table of Specifications
  • Vertical Scaling
  • American Educational Research Association
  • American Evaluation Association
  • American Psychological Association
  • Institute of Education Sciences
  • Interstate School Leaders Licensure Consortium Standards
  • Joint Committee on Standards for Educational Evaluation
  • National Council on Measurement in Education
  • National Science Foundation
  • Office of Elementary and Secondary Education
  • Organisation for Economic Co-operation and Development
  • Teachers’ Associations
  • U.S. Department of Education
  • World Education Research Association
  • Diagnostic and Statistical Manual of Mental Disorders
  • Guiding Principles for Evaluators
  • Accountability
  • Certification
  • Classroom Observations
  • Confidentiality
  • Conflict of Interest
  • Data-Driven Decision Making
  • Educational Researchers, Training of
  • Ethical Issues in Educational Research
  • Ethical Issues in Evaluation
  • Evaluation Consultants
  • Federally Sponsored Research and Programs
  • Framework for Teaching
  • Professional Development of Teachers
  • Professional Learning Communities
  • School Psychology
  • Teacher Evaluation
  • Demographics
  • Dissertations
  • Literature Review
  • Methods Section
  • Research Proposals
  • Results Section
  • Delphi Technique
  • Discourse Analysis
  • Document Analysis
  • Ethnography
  • Field Notes
  • Focus Groups
  • Grounded Theory
  • Historical Research
  • Interviewer Bias
  • Market Research
  • Member Check
  • Narrative Research
  • Naturalistic Inquiry
  • Participant Observation
  • Qualitative Data Analysis
  • Qualitative Research Methods
  • Transcription
  • Trustworthiness
  • Applied Research
  • Aptitude-Treatment Interaction
  • Causal Inference
  • Ecological Validity
  • External Validity
  • File Drawer Problem
  • Fraudulent and Misleading Data
  • Generalizability
  • Hypothesis Testing
  • Impartiality
  • Interaction
  • Internal Validity
  • Objectivity
  • Order Effects
  • Representativeness
  • Response Rate
  • Scientific Method
  • Type III Error
  • ABA Designs
  • Action Research
  • Case Study Method
  • Causal-Comparative Research
  • Cross-Cultural Research
  • Crossover Design
  • Design-Based Research
  • Double-Blind Design
  • Experimental Designs
  • Gain Scores, Analysis of
  • Latin Square Design
  • Meta-Analysis
  • Mixed Methods Research
  • Monte Carlo Simulation Studies
  • Nonexperimental Designs
  • Pilot Studies
  • Posttest-Only Control Group Design
  • Pre-experimental Designs
  • Pretest–Posttest Designs
  • Quasi-Experimental Designs
  • Regression Discontinuity Analysis
  • Repeated Measures Designs
  • Single-Case Research
  • Solomon Four-Group Design
  • Split-Plot Design
  • Static Group Design
  • Time Series Analysis
  • Triple-Blind Studies
  • Twin Studies
  • Zelen’s Randomized Consent Design
  • Cluster Sampling
  • Control Variables
  • Convenience Sampling
  • Expert Sampling
  • Judgment Sampling
  • Markov Chain Monte Carlo Methods
  • Quantitative Research Methods
  • Quota Sampling
  • Random Assignment
  • Random Selection
  • Replication
  • Simple Random Sampling
  • Snowball Sampling
  • Stratified Random Sampling
  • Survey Methods
  • Systematic Sampling
  • Bubble Drawing
  • C Programming Languages
  • Collage Technique
  • Computer Programming in Quantitative Analysis
  • Concept Mapping
  • HyperResearch
  • Johari Window
  • 45 CFR Part 46
  • Belmont Report
  • Cultural Competence
  • Deception in Human Subjects Research
  • Declaration of Helsinki
  • Falsified Data in Large-Scale Surveys
  • Flynn Effect
  • Human Subjects Protections
  • Human Subjects Research, Definition of
  • Informed Consent
  • Institutional Review Boards
  • Nuremberg Code
  • Service-Learning
  • Social Justice
  • STEM Education
  • Matrices (in Social Network Analysis)
  • Network Centrality
  • Network Cohesion
  • Network Density
  • Social Network Analysis
  • Social Network Analysis Using R
  • Bayes’s Theorem
  • Bayesian Statistics
  • Marginal Maximum Likelihood Estimation
  • Maximum Likelihood Estimation
  • Analysis of Covariance
  • Analysis of Variance
  • Binomial Test
  • Canonical Correlation
  • Chi-Square Test
  • Cluster Analysis
  • Cochran Q Test
  • Confirmatory Factor Analysis
  • Cramér’s V Coefficient
  • Descriptive Statistics
  • Discriminant Function Analysis
  • Exploratory Factor Analysis
  • Fisher Exact Test
  • Friedman Test
  • Goodness-of-Fit Tests
  • Hierarchical Regression
  • Inferential Statistics
  • Kolmogorov-Smirnov Test
  • Kruskal-Wallis Test
  • Levene’s Homogeneity of Variance Test
  • Logistic Regression
  • Mann-Whitney Test
  • Mantel-Haenszel Test
  • McNemar Change Test
  • Measures of Central Tendency
  • Measures of Variability
  • Median Test
  • Mixed Model Analysis of Variance
  • Multiple Linear Regression
  • Multivariate Analysis of Variance
  • Part Correlations
  • Partial Correlations
  • Path Analysis
  • Pearson Correlation Coefficient
  • Phi Correlation Coefficient
  • Repeated Measures Analysis of Variance
  • Simple Linear Regression
  • Spearman Correlation Coefficient
  • Standard Error of Measurement
  • Stepwise Regression
  • Structural Equation Modeling
  • Survival Analysis
  • Two-Way Analysis of Variance
  • Two-Way Chi-Square
  • Wilcoxon Signed Ranks Test
  • Alpha Level
  • Autocorrelation
  • Bonferroni Procedure
  • Bootstrapping
  • Categorical Data Analysis
  • Central Limit Theorem
  • Conditional Independence
  • Convergence
  • Correlation
  • Data Mining
  • Dummy Variables
  • Effect Size
  • Estimation Bias
  • Eta Squared
  • Gauss-Markov Theorem
  • Holm’s Sequential Bonferroni Procedure
  • Latent Class Analysis
  • Local Independence
  • Longitudinal Data Analysis
  • Matrix Algebra
  • Mediation Analysis
  • Missing Data Analysis
  • Multicollinearity
  • Parameter Invariance
  • Parameter Mean Squared Error
  • Parameter Random Error
  • Post Hoc Analysis
  • Power Analysis
  • Probit Transformation
  • Robust Statistics
  • Sample Size
  • Significance
  • Simpson’s Paradox
  • Standard Deviation
  • Type I Error
  • Type II Error
  • Winsorizing
  • Cross-Classified Models
  • Diagnostic Classification Models
  • Dyadic Data Analysis
  • Generalized Linear Mixed Models
  • Growth Curve Modeling
  • Hierarchical Linear Modeling
  • Model–Data Fit
  • Active Learning
  • Bilingual Education, Research on
  • College Success
  • Constructivist Approach
  • Cooperative Learning
  • Distance Learning
  • Evidence-Based Interventions
  • Homeschooling
  • Instructional Objectives
  • Instructional Rounds
  • Kindergarten
  • Kinesthetic Learning
  • Learning Progressions
  • Learning Styles
  • Learning Theories
  • Mastery Learning
  • Montessori Schools
  • Out-of-School Activities
  • Pygmalion Effect
  • Quantitative Literacy
  • Reading Comprehension
  • Scaffolding
  • School Leadership
  • Self-Directed Learning
  • Social Learning
  • Socio-Emotional Learning
  • Waldorf Schools
  • g Theory of Intelligence
  • Ability–Achievement Discrepancy
  • Applied Behavior Analysis
  • Attribution Theory
  • Behaviorism
  • Cattell–Horn–Carroll Theory of Intelligence
  • Classical Conditioning
  • Classical Test Theory
  • Cognitive Neuroscience
  • Educational Psychology
  • Educational Research, History of
  • Emotional Intelligence
  • Epistemologies, Teacher and Student
  • Experimental Phonetics
  • Feedback Intervention Theory
  • Generalizability Theory
  • Improvement Science Research
  • Information Processing Theory
  • Instructional Theory
  • Multiple Intelligences, Theory of
  • Operant Conditioning
  • Paradigm Shift
  • Phenomenology
  • Postpositivism
  • Pragmatic Paradigm
  • Premack Principle
  • Reinforcement
  • Response to Intervention
  • School-Wide Positive Behavior Support
  • Social Cognitive Theory
  • Speech-Language Pathology
  • Terman Study of the Gifted
  • Transformative Paradigm
  • Triarchic Theory of Intelligence
  • Universal Design in Education
  • Wicked Problems
  • Zone of Proximal Development
  • Hawthorne Effect
  • Instrumentation
  • John Henry Effect
  • Nonresponse Bias
  • Observer Effect
  • Placebo Effect
  • Reactive Arrangements
  • Regression Toward the Mean
  • Restriction of Range
  • Selection Bias
  • Threats to Research Validity
  • Treatment Integrity


Ph.D. in Education Research, Measurement, and Evaluation (UNC Charlotte)

The doctoral program at UNC Charlotte prepares professionals who seek advanced research, statistical, and evaluation skills for positions in a wide variety of educational institutions, including higher education, K-12 school districts, for-profit companies, nonprofit agencies, community colleges, think tanks, government organizations, and other institutions concerned with solving problems in education.

The program builds on the Master of Education (M.Ed.) or a comparable program. The 60-credit Ph.D. program includes 9 credit hours of core coursework in foundations, 21 credit hours in research methodology and data analysis, 9 credit hours in a research specialization, 6 credit hours in a secondary area of concentration, 6 credit hours of internship, and 9 credit hours in dissertation proposal design and dissertation study. Additional coursework may be required for students who do not have a foundation in research.

Transfer Credit

The program accepts up to two courses in transfer from a regionally accredited doctoral-granting institution, provided that the Education Research Doctoral Committee determines that the course or courses to be transferred are equivalent to similar courses required in the UNC Charlotte Ph.D. program or fit the specialty area. The grade in these transfer courses must be an A or B. All of the dissertation work must be completed at UNC Charlotte.

Students are admitted for either full-time study or intensive part-time study and begin in the fall or spring semester. Students must complete their degree, including the dissertation, within 8 years. The minimum time to completion for a full-time student is 3 years.

Additional Admission Requirements

Applications for admission are accepted twice a year to begin doctoral studies in the fall or spring semester.

The following documents/activities must be submitted in support of the application:

  • Official transcript(s) of all academic work attempted since high school, indicating a GPA of 3.5 (on a scale of 4.0) in a graduate degree program*
  • Official report of a score on the GRE or MAT that is no more than 5 years old*
  • At least three references* from individuals who know the applicant’s current work and/or academic achievements in previous degree work
  • A two-page essay describing prior educational and research experiences and objectives for pursuing doctoral studies*
  • A current resume or vita
  • A professional writing sample (e.g., published article, manuscript submitted for publication, term paper submitted in prior coursework, abstract of thesis, teaching manual)
  • International students must submit official and acceptable English language proficiency test scores on the Test of English as a Foreign Language (TOEFL), the Michigan English Language Assessment Battery (MELAB), or the International English Language Testing System (IELTS). All tests must have been taken within the past two years.**

*These items are required of applicants to any of UNC Charlotte’s doctoral programs.

**See the Graduate School’s website for minimum acceptable scores.

Degree Requirements

Core Courses (9 credit hours)

  • ADMN 8695 - Advanced Seminar in Teaching and Learning (3)
  • EDCI 8180 - Critical Issues and Perspectives in Urban Education (3)
  • RSCH 8210 - Applied Research Methods (3)

Research Methods and Advanced Content (21 credit hours)

  • RSCH 8110 - Descriptive and Inferential Statistics (3)
  • RSCH 8111 - Qualitative Research Methods (3)
  • RSCH 8120 - Advanced Statistics (3)
  • RSCH 8121 - Qualitative Data Collection and Analysis (3)
  • RSCH 8140 - Multivariate Statistics (3)
  • RSCH 8196 - Program Evaluation Methods (3)
  • RSCH 8220 - Advanced Measurement (3)

Research Specialization (9 credit hours)

Select three of the following:

  • RSCH 8112 - Survey Research Methods (3)
  • RSCH 8113 - Single-Case Research (3)
  • RSCH 8130 - Presentation and Computer Analysis of Data (3)
  • RSCH 8150 - Structural Equation Modeling (3)
  • RSCH 8230 - Classical and Modern Test Theory (3)
  • RSCH 8890 - Special Topics in Research (3) (Hierarchical Linear Modeling)

8000-level research courses from other doctoral programs across the University may be considered.

Secondary Area of Concentration (6 credit hours)

Students are required to complete a secondary concentration of their choice, with the approval of their doctoral advisor/committee.  Areas may include elective courses from: (a) educational leadership; (b) curriculum and instruction; (c) statistics; (d) counseling; (e) early childhood; (f) special education; (g) instructional systems technology; and (h) higher education.

Internship (6 credit hours)

  • RSCH 8410 - Internship in Educational Research (3)
  • RSCH 8411 - Internship in Teaching Educational Research (3)

Proposal Design (3 credit hours)

  • RSCH 8699 - Dissertation Proposal Design (3)

Dissertation (minimum 6 credit hours)

  • RSCH 8999 - Dissertation (1 to 9)
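
As a quick arithmetic check (counting the dissertation at its 6-credit minimum), the components listed above account for the stated 60-credit total:

9 (core) + 21 (research methods) + 9 (research specialization) + 6 (secondary concentration) + 6 (internship) + 3 (proposal design) + 6 (dissertation) = 60 credit hours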

Additional Degree Requirements

In addition to coursework and the dissertation, students must complete a portfolio of achievements related to the three focus areas of research, collaboration, and teaching. This portfolio must receive satisfactory ratings from the Portfolio Review Committee at two critical junctures, known as Benchmark One and Benchmark Two. The first benchmark serves as a Qualifying Examination and includes demonstration of writing, collaboration, and research skills. The second benchmark is comparable to the comprehensive exams required by some Ph.D. programs. Students receive opportunities to build this portfolio through the Research and Practice coursework. Examples of possible products in the portfolio include: research-based paper, journal article review, conference presentation, evaluation project, team study, and research report.

Dissertation Requirements

The purpose of the dissertation is for doctoral students to demonstrate their ability to synthesize the professional literature and generate new knowledge for the profession using well-established research tools. For the Ph.D. in Education Research, Measurement, and Evaluation, the dissertation may be quantitative, qualitative, or mixed methods. Whatever the type of design, it must adhere to current standards for quality as reflected in professional writing on the chosen method of research design and in the current literature. Students must be continuously enrolled for dissertation research credits through and including the semester of graduation. Defense of the dissertation is conducted in a final oral examination that is open to the University community.

Application for Degree

Students must submit an Application for Degree during the semester in which they successfully defend their dissertation proposal.  Adherence to Graduate School deadlines is expected.  Degree requirements are completed when students successfully defend their dissertation and file the final copy of the dissertation in the Graduate School.


Assessing Contexts of Learning, pp. 469–488

Assessment and Evaluation in Educational Contexts

  • Sonja Bayer, Eckhard Klieme & Nina Jude
  • First Online: 06 December 2016


Part of the book series: Methodology of Educational Measurement and Assessment (MEMA)

For at least the past three decades, assessment, evaluation, and accountability have been major strands of educational policy and practice internationally. However, the available data on how exactly assessment- and evaluation-based policies are framed and implemented, or how they shape practices within schools, are still limited. This chapter addresses these issues with a broad focus that takes into account several perspectives on school evaluation and student assessment, together with everyday practices of teacher judgment and grading. First, we address assessment and evaluation practices for the purpose of educational system monitoring. Second, school evaluation practices, as well as the use of assessment and evaluation results at the school level, are discussed. A third perspective focuses on practices of teacher evaluation. Finally, practices of student assessment within schools and classrooms are examined. The instruments described and recommended in this chapter have implications for international research, as well as national studies.

  • School evaluation
  • Teacher evaluation
  • Student assessment
  • Large-scale assessment


This chapter expands on a technical paper that was presented to the PISA 2015 Questionnaire Expert Group (QEG) in May 2012 (Doc. QEG 2012-05 Doc 08).


Shepard, L. A. (2006). Classroom assessment. In R. L. Brennan (Ed.), Educational measurement (pp. 623–646). Westport: Rowman and Littlefield Publishers.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153–189.

Simons, H. (2002). School self-evaluation in a democracy. In D. Nevo (Ed.), School-based evaluation: An international perspective (pp. 17–34). Amsterdam/Oxford: Elsevier Science.

Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118 (2), 113–141.

Stufflebeam, D. L. (2003). The CIPP model for evaluation. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation. Part one: Perspectives/part two: Practice (pp. 31–62). Dordrecht: Kluwer Academic Publishers.

Taylor, E. S., & Tyler, J. (2011). The effect of evaluation on performance: Evidence from longitudinal student achievement data of mid-career teachers . NBER Working Paper 16877. Cambridge, MA.

Teltemann, J., & Klieme, E. (in press). The impact of international testing projects on policy and practice. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 369–386). New York: Routledge.

Torrance, H. (Ed.). (2013). Educational assessment and evaluation: Major themes in education . New York: Routledge.

Van de Vijver, F. J. (1998). Towards a theory of bias and equivalence. Zuma Nachrichten Spezial, 3 , 41–65.

Visscher, A. J., & Coe, R. (2003). School performance feedback systems: Conceptualisation, analysis, and reflection. School Effectiveness and School Improvement, 14 (3), 321–349.

Whitcomb, J. (2014). Review of “Fixing classroom observations” . Boulder: National Education Policy Center. http://nepc.colorado.edu/thinktank/review-fixing-classroom-observations . Accessed 17 June 2016.

Wößmann, L. (2003). Schooling resources, educational institutions, and student performance: The international evidence. Oxford Bulletin of Economics and Statistics, 65 (2), 117–170.

Wößmann, L., Lüdemann, E., Schütz, G., & West, M. R. (2009). School accountability, autonomy and choice around the world . Cheltenham: Edward Elgar.

Wyatt-Smith, C. (2014). Designing assessment for quality learning: The enabling power of assessment . Heidelberg: Springer.

Download references

Author information

Authors and Affiliations

Department for Educational Quality and Evaluation, German Institute for International Educational Research (DIPF), Frankfurt, Germany

Sonja Bayer, Eckhard Klieme & Nina Jude


Corresponding author

Correspondence to Sonja Bayer.

Editor information

Editors and Affiliations

Department for Educational Quality and Evaluation, German Institute for International Educational Research (DIPF), Schloßstraße 29, 60486 Frankfurt, Hessen, Germany

Susanne Kuger

Department for Educational Quality and Evaluation, German Institute for International Educational Research (DIPF), Schloßstraße 29, 60486 Frankfurt, Hessen, Germany

Eckhard Klieme

Department for Educational Quality and Evaluation, German Institute for International Educational Research (DIPF), Schloßstraße 29, 60486 Frankfurt, Hessen, Germany

Nina Jude

Department of Educational Psychology, University of Wisconsin-Madison, 1025 West Johnson Street, Madison, Wisconsin, USA

David Kaplan


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Bayer, S., Klieme, E., Jude, N. (2016). Assessment and Evaluation in Educational Contexts. In: Kuger, S., Klieme, E., Jude, N., Kaplan, D. (eds) Assessing Contexts of Learning. Methodology of Educational Measurement and Assessment. Springer, Cham. https://doi.org/10.1007/978-3-319-45357-6_19

DOI: https://doi.org/10.1007/978-3-319-45357-6_19

Published: 06 December 2016

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-45356-9

Online ISBN: 978-3-319-45357-6

eBook Packages: Education (R0)


Educational Measurement and Evaluation

Research Papers/Topics in Educational Measurement and Evaluation

Quality and Productivity of Teachers in Selected Public Primary Schools in Apac District, Uganda

ABSTRACT The purpose of this study was to determine the relationship between teacher quality and teacher performance in selected primary schools of Apac District, Uganda. The study was guided by the following objectives: i) finding out the profile of the respondents; ii) establishing the level of teacher quality; iii) establishing the level of teacher productivity; and iv) finding out the relationship between the level of teacher quality and the level of teacher productivity. A descriptive correlational survey research design was employ...

An Analysis of Teachers’ Perception On the Relationship Between Reward, Compensation and Performance of Secondary School Teachers in Bugiri District, Uganda

TABLE OF CONTENTS Title; Declaration; Supervisors' Approval; Dedication; Acknowledgement; List of Tables; List of Figures; List of Acronyms; Abstract. CHAPTER ONE: INTRODUCTION 1.0 Overview 1.1 Background of the Study 1.2 Statement of the Problem 1.3 Purpose of the Study 1.4 Objectives of the Study 1.5 Scope of the Study 1.6 Significance of the Study. CHAPTER TWO: REVIEW OF RELATED LITERATURE 2.0 Overview 2.1 Theoretical and Conceptual Framework 2.2 The Impact of Extrinsic and Intrinsic ...

Social Factors and Students’ Discipline in Public Secondary Schools in Serere County, Serere District, Uganda

ABSTRACT This study investigated the influence of social factors on students' discipline in selected secondary schools in Serere County, Serere District. To achieve the purpose of the study, three research objectives were set: to ascertain the influence of social factors in secondary schools; to assess students' discipline in secondary schools; and to establish the relationship between social factors and students' discipline in secondary schools. The design used...

Principals Decision Making Strategies and Secondary School Teacher’s Effectiveness in Ijumu Local Government Area Kogi State

ABSTRACT The leadership role of the principal as the chief executive of the school is enormous; the principal plays a key role in the success or failure of the organisation. The study therefore investigated principals' decision-making strategies and teacher effectiveness in secondary schools. Specifically, the objectives of this study were to: (i) examine how decisions are made by principals at the secondary school level; (ii) assess the effect of decision-making strategies on teache...

Socio-Economic Status of Parents and Students’ Academic Performance in Lagelu Local Government Area, Oyo State

TABLE OF CONTENT Title Page; Certification ...

Teachers Perceptions Of The Performance Appraisal System Effectiveness In Public Secondary Schools In Naivasha And Gilgil Districts, Nakuru County

ABSTRACT An effective performance appraisal system depends on how it addresses itself to the views and attitudes of the teachers in the school. Since 2012, teachers in Kenya have been appraised using a revised system of appraisal whose effectiveness has not been verified. The purpose of this study was to explore teachers' perceptions of the effectiveness of the appraisal system. The specific objectives of the study were to: establish teachers' perceptions about the effectiveness of the ...

Youth’s Credit Accessibility And Employment Creation In Tanzania: A Case Of Kilombero District, Morogoro

ABSTRACT This study was conducted to assess youths' credit accessibility and employment creation in Kilombero District. Specifically, the study aimed at identifying the proportion of youth with access to credit, identifying the challenges faced by the youth in their endeavor to acquire credit, and determining the contribution of youths' access to credit towards employment creation. The study adopted a cross-sectional design whereby data were collected once from 181 randomly selected you...

Assessment Of Household Water Services Accessibility In Karatu District Tanzania: A Case Of World Vision Dream Village Wash Project

ABSTRACT The study aimed at assessing household water service accessibility in Karatu District, Tanzania, using the case of the World Vision Dream Village WASH project. Specifically, the study sought to establish the level of access to water services before and after the Dream Village WASH project, to assess community involvement in the project, and to determine household water users' opinions on the project. The study adopted both quantitative and qualitative research appr...

Health Literacy, HIV/AIDS, and Gender: A Ugandan Youth Lens

Abstract Youth, the World Bank argues, need to become a constituency for reform in developing countries. This case study responds to this challenge by investigating adolescent students' understanding of the relationship between health literacy, HIV/AIDS, and gender in the context of Uganda. The four questions investigated are: (i) What kind of health literacy, HIV/AIDS, and gender-related information is accessible to Ugandan adolescent secondary school students? (ii) In the students' view, w...

DIFFERENTIAL ITEM FUNCTIONING OF WEST AFRICAN SENIOR SCHOOL CERTIFICATE EXAMINATION IN CORE SUBJECTS IN SOUTHERN GHANA

ABSTRACT The research aimed at examining whether the 2012-2016 May/June West African Senior School Certificate Examination (WASSCE) in core subjects exhibited gender and location differential item functioning (DIF) in Ghana, using the cross-sectional design. Six research hypotheses and one research question were formulated for the study. A sample of 36,035 candidates, consisting of 8,994 (English Language), 8,935 (Mathematics), 9,089 (Integrated Science) and 9,017 (Social Studies) candidates, wa...

ASSESSMENT OF THE QUALITY OF ANTENATAL CARE IN MANAGING ANAEMIA BASED ON GUIDELINES IN ASHAIMAN MUNICIPAL AND NINGO PRAMPRAM IN GREATER ACCRA

ABSTRACT Background: The development of maternal health services does not guarantee their use by women, nor does the use of maternal health services guarantee ideal results for women. A unique element of health care that helps explain why women either do not access facilities at all, or access them late, or experience unfavourable results in spite of timely presentation, is the intangible notion of care. Quality of care based on the definition of Ins...

THE IMPLEMENTATION OF SCHOOL-BASED ASSESSMENT IN KEEA DISTRICT IN CENTRAL REGION OF GHANA

ABSTRACT The study examined the implementation of School-Based Assessment in the KEEA district in the Central Region. The quota sampling procedure was used to select 200 basic school teachers in ten basic schools for the study. Five research questions and three hypotheses guided the study. A questionnaire was used for data collection. The Cronbach's coefficient alpha for the questionnaire was 0.85. The results showed that, to a great extent, teachers agreed that they have knowledge abou...

THE EFFECT OF MODERATION OF CONTINUOUS ASSESSMENT SCORES ON THE PERFORMANCE SCORES OF CANDIDATES AT THE BASIC EDUCATION CERTIFICATE EXAMINATION LEVEL

ABSTRACT The purpose of the study was to determine the effect of moderation of continuous assessment scores on the overall performance scores of candidates at the Basic Education Certificate Examination (BECE). The descriptive research design was adopted for the study. The target population was the 278,413 candidates who registered for the 2004 BECE in all subjects. Schools were stratified into high, average and low performance categories depending on the performance of their candidates at t...

Fair Testing Practices in District Mandated Testing Programme in the Ashanti Region of Ghana

ABSTRACT The study's main purpose was to investigate the fair testing practices of a test developer and teachers in a district-mandated testing programme in the Ashanti Region of Ghana. The research design adopted for this study was the multilevel mixed methods triangulation design. The critical case sampling technique was utilized in selecting 3 key informants and 9 test questions. Also, the two-stage cluster sampling technique was adopted in selecting 251 teachers from 162 public JHSs. The ma...

MULTIPLE-CHOICE CONSTRUCTION COMPETENCIES AND ITEMS' QUALITY: EVIDENCE FROM SELECTED SENIOR HIGH SCHOOL SUBJECT TEACHERS IN KWAHU-SOUTH DISTRICT

ABSTRACT The purpose of the study was to explore the relationship between the multiple-choice test construction competencies of senior high school teachers in the Kwahu-South District and the quality of the multiple-choice test items they construct, as well as the effect of years of teaching on that relationship. To examine the relationship among the variables under investigation, the quantitative approach was employed using a correlational research design. The total number of teachers that constitut...

Thesis and project topics in the Department of Educational Measurement and Evaluation

Popular Papers/Topics

  • Criterion-Referenced Measurement for Educational Evaluation and Selection
  • Issues of Examination Malpractice Among Secondary School Students in Ebonyi Local Government Area of Ebonyi State
  • Problems Confronting Continuous Assessment Practices in Nigeria
  • Effects of Peer-Assessment on Students' Academic Achievement in Computer Science in Secondary Schools in Ezza South Local Government Area of Ebonyi State
  • Schools as Moderators of Neighborhood Influences on Adolescent Academic Achievement and Risk of Obesity: A Cross-Classified Multilevel Investigation
  • Emotional Intelligence and Attitude as Predictors of Achievement in Mathematics
  • Item Analysis on Some Selected Mathematics Achievement Test
  • Knowledge Management Plan for Pension Scheme
  • Undergraduate Students' Views and Attitudes Towards the Use of Computer Based Test for Examination in the University of Ilorin
  • Reducing Unemployment in Nigeria: An Evaluation of the Entrepreneurship Programme in Ondo State
  • Accountability in Education
  • Knowledge of Formative Assessment Practices of SHS Mathematics Teachers in the Cape Coast Metropolis
  • Completely Randomised Design
  • Planning the Evaluation Process
  • Gender Difference in Formative Assessment Knowledge of SHS Teachers in the Upper West Region of Ghana


University of Miami

Ph.D. in Research, Measurement, and Evaluation

https://eps.edu.miami.edu/graduate/doctoral/rme-phd/index.html

The curriculum of the Ph.D. in RME is structured around six components: (A) a core set of 36 credits (12 courses of 3 credits each) of required coursework covering the fundamentals of research design, measurement, and statistical analysis; (B) 6 credits of a research apprenticeship, in which students conduct mentored research under the supervision of RME faculty members; (C) 6 credits of field experience in educational research, in which students play active roles in the design and analysis of applied research or evaluation projects; (D) the doctoral qualifying exam; (E) 12 credits of the doctoral dissertation; and (F) 6 credits of electives (12 credits of electives for students who do not hold a master's degree).

Application Requirements

Admission to all graduate-degree concentrations in the School of Education and Human Development is based on the recommendation of the faculty. Admissions decisions are based on faculty review of the following general requirements that apply to  all  Graduate Programs in the School as well as specific documents listed under each concentration.

Applicants must:

  • achieve acceptable scores on the Graduate Record Exam (GRE) taken within the past five years. International applicants whose native language is not English, or applicants whose degrees are from a non-U.S. university, must pass the Test of English as a Foreign Language (TOEFL) or the International English Language Testing System (IELTS) in addition to the GRE;
  • provide official transcripts showing completion of a bachelor’s degree from an accredited institution and an acceptable undergraduate grade point average; a minimum undergraduate GPA of 3.0 is required. Official transcripts from every institution attended by an applicant, whether or not the applicant completed a degree program at the institution, are required;
  • provide three letters of recommendation that address the issues and meet the criteria established by the program being applied to;
  • provide a personal statement that addresses the mission and purpose of the program being applied to;
  • take part in an admissions interview (required by some programs); and
  • exhibit personal and professional experiences and characteristics that are relevant to the profession and/or field and/or degree program for which the application is being submitted.

Doctor of Philosophy (Ph.D.)

In addition to the factors listed as general requirements for all applications to the SEHD’s graduate programs, consideration for admission to the Ph.D. program will include the following:

  • letters of recommendation should address the applicant’s academic potential;
  • available student space in the program;

International Applications

All international applications must provide additional information and meet additional requirements as required by the UM Graduate School and the Office of International Student and Scholar Services. For an appropriate link to these requirements, please visit the Graduate School website.

Admission Decision

Once an applicant has been admitted to graduate study, that individual should meet with the faculty advisor appointed to serve in that capacity, whose name appears in the admissions letter. This advisor will help the student enroll in courses appropriate to the program and develop and refine a Program of Study, which must be on file in the Office of Graduate Studies by the end of the first academic year of enrollment.

Honor Code/Handbook of Policies and Procedures

The School of Education and Human Development follows the Graduate School’s Honor Code. All students are required to review the Graduate Student Honor Code and the School of Education and Human Development’s Handbook of Policies and Procedures for Graduate Students and submit the signed Acknowledgement of Receipt located on page 3 by the end of their first semester of enrollment.

Curriculum Requirements

EPS 799 (Advanced Individual Study) and EPS 712 (Field Experience in Educational Research) can be repeated over and above the credits fulfilling a student's apprenticeship (6 credits) and field experience (6 credits) requirements.

6 credits for students with a master's degree; 12 credits for students without a master's degree

Upon the approval of their academic advisor, students may take classes from other departments.

Additional courses may be substituted upon approval from a student’s academic advisor. These options include a variety of graduate courses in the fields of computer science, psychology, education, and other areas of interest.

This is a sample Plan of Study. Your actual course sequence may vary depending on your previous academic experience as well as current course offerings. Students should meet with their academic advisor each semester to determine the appropriate course selection.

Sample Plan of Study

The mission of the RME doctoral program is to provide students with the requisite training in the application of statistical and measurement methodologies to conduct original research in the fields of educational research and measurement methodology, and to serve as experts in the areas of research design, data analysis, and measurement.

Student Learning Outcomes

  • Students will demonstrate mastery of the computer programming skills required to conduct research projects using computer simulation in R (see the sketch after this list).
  • Students will conduct original research in the field of statistical and measurement methodology.
  • Students will demonstrate the ability to provide methodological consultation.
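To illustrate the kind of simulation such projects involve, here is a minimal sketch in base R; the sample size, target reliability, and variable names are invented for this example and are not part of the curriculum. It generates two parallel test forms under the classical true-score model and checks that their correlation recovers the intended reliability:

    # Simulate two parallel forms (X1, X2 = true score + error) and
    # verify that cor(X1, X2) recovers the target reliability.
    set.seed(123)
    n          <- 10000                              # simulated examinees
    rel        <- 0.80                               # target reliability
    true_score <- rnorm(n, mean = 0, sd = sqrt(rel))
    X1         <- true_score + rnorm(n, sd = sqrt(1 - rel))
    X2         <- true_score + rnorm(n, sd = sqrt(1 - rel))
    cor(X1, X2)                                      # approximately 0.80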


School of Education

Doctor of Philosophy in Education: Evaluation, Measurement and Statistics Specialization

The School of Education has launched a STEM-designated PhD in Educational Statistics and Research Methods . We encourage new students to explore that program. This page is for students admitted to the Evaluation, Measurement, and Statistics specialization in Fall 2019 and earlier.

Drawing on the diverse methodological expertise of faculty, the Evaluation, Measurement, and Statistics (EMS) specialization offers a comprehensive range of courses designed to prepare students to develop, critically evaluate, and properly use quantitative and mixed methodologies to advance educational research. The EMS faculty believes that fundamental contributions to educational methodology can be accomplished through study of a wide variety of areas in research methods. Students in this specialization

  • Understand recent developments in psychometric theory, as well as technical issues underlying construction and use of tests for selection, placement, and instruction.
  • Develop skills in advanced statistical modeling using a variety of software and have an opportunity to examine how these models are applied to school effectiveness, economic and social stratification, the structure of human abilities, and achievement growth.


Program Coordinator: Dr. Teomara Rutherford

Specialization Coordinator: Dr. Zachary Collier

Specialization Courses

In addition to the Doctoral Core Courses, the following specialization courses are required of all students in the Evaluation, Measurement and Statistics specialization:

  • EDUC 863: Principles of Program Evaluation
  • EDUC 873: Multilevel Models in Education
  • EDUC 826: Mixed Methods in Social Science Research

Note: Students must take one additional methodology course as an elective; this elective must be approved as part of the student’s Individual Program Plan.

Sample Course Schedules

Sample EMS course schedules for students who enter the Ph.D. program in the following semesters are available through the links below.

  • Students entering in the Fall of odd years
  • Students entering in the Fall of even years

Specialization Requirements

All Ph.D. students in EMS must complete the following additional requirements.

Methods Core Courses

Students in EMS must choose the following three quantitative methodology courses to satisfy the research Methods Core requirement.

  • EDUC 812: Regression and Structural Equation Modeling
  • EDUC 865: Educational Measurement Theory
  • EDUC 874: Applied Multivariate Data Analysis

Specialization Area Exam

The Specialization Area Exam in EMS is designed to assess a student’s proficiency in integrating various aspects of research methodology to address substantive issues in education. The exam is to be taken at the beginning of the fourth year of study and is organized as a take-home exam spanning no more than three days.

The exam consists of two parts. The first part is based on the course work in EMS, covering topics in statistical methodology, educational measurement, and evaluation. The second part is based on the dissertation topic area.

The student, with consultation from the adviser, develops a reading list for the specialization examination. The reading list needs approval by the EMS faculty. The entire area faculty is involved in developing questions and reading the exam.

Students will be notified of their results within three weeks of submitting their exam. Should a student not demonstrate satisfactory performance on the exam, the student will have one attempt to retake the exam, scheduled at the end of the semester in which the exam was taken. Failure to demonstrate satisfactory performance on the retake of the exam will result in termination from the program. This requirement may be waived in an exceptional case if the EMS faculty deems it appropriate to do so.

Recent Graduates

Currently the market for students with the skills and competencies afforded by this specialization is quite strong in comparison to the overall academic market. EMS graduates will be well prepared for careers in applied education research in several arenas in both the for-profit and non-profit sectors, such as:

  • Tenure-track or research faculty at Research-I universities
  • Research/evaluation staff at national research organizations (e.g., Abt, AIR, Mathematica, MDRC, RAND, Westat)
  • Research/evaluation staff at local research organizations (e.g., Research for Action, Branch Associates, Research for Better Schools)
  • Research/psychometric staff at national measurement organizations (e.g., College Board, CTB, ETS, Harcourt/Riverside, Pearson)
  • Research/evaluation Staff in federal agencies (e.g., Institute of Education Sciences) and regional agencies (e.g., REL Mid-Atlantic)
  • Research/evaluation Staff at local school districts and state education agencies

Student Spotlight


Kati Tilley

“Through my assistantship, I have gained critical experience in communicating research findings. One of the most valuable experiences I have had was learning to write and present results to a non-academic audience. I led the development of an individualized report of survey results and presented them in person to the schools who participated in our study.”

Program Faculty

The following faculty members are committed to the EMS program:


Graduate Students

Alumni Profile: Akisha Osei Sarfo

“I chose UD because of the faculty and mutual research interests at the time. I wanted to expand my analytic expertise but also have opportunities to apply them in educational contexts. I was able to learn both in the classroom and through internships at places such as the Delaware Department of Education and the US Census Bureau. The faculty I worked with had genuine care for my success in the classroom and in my career. My work with my advisor Dr. Elizabeth Farley-Ripple and Dr. Henry May led to a dissertation fellowship/grant with the American Education Research Association (AERA). I also came back to work as a research assistant professor with the Center for Research in Education and Social Policy (CRESP) under Dr. May.”

Akisha Osei Sarfo



UNC Charlotte

Ph.D. in Educational Research, Measurement, and Evaluation


The Ph.D. in Educational Research, Measurement, and Evaluation is designed for individuals who are interested in becoming an expert in research methodology, measurement, applied statistics, or program evaluation.

The program targets experienced educators who hold a master’s degree in a related educational field. Career opportunities may be found in a wide variety of educational institutions including higher education, K-12 school districts, for-profit companies, nonprofit agencies, community colleges, think tanks, government organizations, and other institutions concerned with solving problems in education.

Both full-time and part-time students are welcome. Full-time admission is available only in the Fall semester, while part-time admission occurs year-round (Fall, Spring, Summer). Full-time students can expect to complete the program within 3-4 years and part-time students within 4-6 years. To meet student needs, we offer most courses in the evening in online and hybrid formats. A limited number of courses are offered in summer.

Graduates from our program may find job opportunities in:

  • Academia (university or college professor of educational research)
  • Institutions and organizations that focus on research and evaluation
  • Testing companies
  • Offices of assessment, research, and accountability at college/university, school district, or state levels

Program Objectives

Graduates of the ERME program will demonstrate:

  • in‐depth knowledge of educational research, measurement, and evaluation and be able to apply knowledge and skills specific to their discipline
  • ability to conduct independent research to answer relevant questions in their area of specialization and add to the body of knowledge in the field of education
  • skills in reflective practice on teaching that addresses diverse learners, research using evidence-based practices, working and collaborating with diverse partners, and using leadership skills
  • skills in research consultation with external agencies including needs assessment, communication, and report writing

Program Requirements


Program Director


Dr. Sandra Dika

276 College of Education Building Phone: 704-687-1821 Email: [email protected]

A limited number of research assistantships with tuition support are available annually for full-time students. Contact the program director for details.

College of Education


Educational Measurement and Statistics

As a student in Educational Measurement and Statistics, you will receive training in educational measurement, applied statistics, and data science in social and educational sciences. You will also learn theory, methods, professional ethics, and methodological best practices in quantitative research methodology.

Our graduates obtain employment in a wide range of industries, from ACT and Pearson to Gallup and Google in addition to positions within universities and colleges. Our alumni find career opportunities where they analyze and interpret quantitative data, independently conduct research and communicate findings to various audiences, design assessments, and use complex quantitative information.

  • MA Educational Measurement and Statistics

Training and hands-on experience with core measurement and statistical methods.

  • PhD Educational Measurement and Statistics

Preparation for scientific and practitioner positions with the cutting-edge analytic methods used in research and industry.

One of the best things about this program is the community. I would not be the researcher that I am without the encouragement and support of both my faculty and student colleagues. - Catherine Mintz (Mathers), PhD Candidate

World-renowned program

International students comprise more than 60 percent of our enrollment, representing a number of different countries and six continents, creating a dynamic and diverse learning environment.

Students come to us from across the state, nation, and globe due to our ties to the world-renowned Iowa Testing Programs as well as industry giants in our backyard, including ACT and Pearson, both spun off from the college thanks to testing pioneer E.F. Lindquist.

International students

Our programs carry a STEM-designated Classification of Instructional Programs (CIP) code, a federal government designation. For international students, this provides extended Optional Practical Training (OPT) visa opportunities to work after graduation.

Come join our world-famous program, which has a 100 percent placement rate. Here are a dozen reasons why our program could be right for you!

  • A history of successful placement of graduates in professional and academic positions
  • Breadth and depth of training in educational measurement methodology and applied statistical methods
  • History of research experiences leading to conference presentations and peer reviewed papers
  • Integration with testing programs
  • Successful grants
  • Community of assessment professionals in local area (ACT, Pearson, College Board)
  • Internship opportunities
  • Large number of students receive financial support
  • Many faculty affiliated with externally funded Research and Development centers
  • Work closely with faculty mentors
  • Interdisciplinary linkages (ties to other programs; informatics, statistics, business, data science)
  • Opportunities for credentials in other programs

Faculty and Research

With 10 tenured or tenure-track faculty, one distinguished visiting professor, and a number of emeritus, affiliated, and adjunct faculty, our program has one of the largest faculty cohorts and one of the best faculty to student ratios in the nation. You will easily find a faculty mentor who matches your research interests.

Areas of expertise include:

  • Test theory (classical, item response, generalizability, factor analysis)
  • Structural equation modeling
  • Bayesian statistics
  • Assessment Instrument Construction and Validation
  • Validity theory
  • Multilevel modeling/Mixed effects models
  • Causal inference

Center for Advanced Studies in Measurement and Assessment

The Center for Advanced Studies in Measurement and Assessment (CASMA) pursues research-based initiatives that lead to advancements in the methodology and practice of educational measurement and assessment.

Center for Evaluation and Assessment

The Center for Evaluation and Assessment (CEA) conducts multiple forms of program evaluation and assessment in collaboration with colleges, universities, and school systems throughout the United States.

Iowa Testing Programs

The world-renowned Iowa Testing Programs (ITP) serves assessment needs in Iowa and across the country by developing high-quality instruments and working with schools to promote valid use of results.

Admissions and Application

For general admissions information, please visit  Graduate Admissions . Please visit the degree pages for specific admissions information:

Financial Support

Prospective students who wish to be considered for all possible sources of financial aid for the fall semester of the following year are strongly encouraged to submit all of their application materials to the University by December 1. Several sources of financial aid, including research assistantships, teaching assistantships, and fellowships, are available to Educational Measurement and Statistics students. Two of the research assistantships are:

Special Graduate Assistantships (SGA)

Recipients assist College of Education faculty in research and other scholarly activity in a field related to the faculty member's interests. Students from all College of Education program areas may apply. Assistantships are awarded on a competitive basis.

Appointments are 50% time for nine months and begin with the start of the academic year.

To request information, interested students should contact David Henkhaus ( [email protected] , 319-384-2714) or send a letter requesting information to:

Chair, SGA Selection Committee The University of Iowa 340 Lindquist Center Iowa City, Iowa 52242-1529

Applicants must be graduate students in good standing or qualify for admission as well as satisfy the College of Education requirements for assistantships.

The deadline for applications is generally in February.

Iowa Testing Programs Research Assistantships

To apply for research assistantships in the Iowa Testing Programs (ITP), send a letter of application to:

Stephen Dunbar Iowa Testing Programs 334A Lindquist Center

University of Iowa Iowa City, Iowa 52242-1529

This letter should indicate your interest in a research assistantship in ITP and describe any background, skills, or training you have that you believe might be especially relevant to Iowa Testing Programs.

The letter should also include your (a) university number (if any); (b) home address and telephone number; (c) email address; (d) university address (if any) and telephone number (if any); (e) degree program; and (f) advisor (if any).

Applications are reviewed by a committee of ITP faculty when research assistantships become available. This committee makes recommendations to the ITP Administrator, who makes final decisions. Applications are kept on file for one year from the date of the application.

Fellowships

In addition to assistantships, several fellowships are available. The University will review the application materials when they are complete and will then send the necessary forms to the applicant.

For general financial aid, contact: Office of Student Financial Aid , 208 Calvin Hall, The University of Iowa Iowa City IA 52242 319/335-1450 [email protected]

Please review the required supplemental documents above before starting the general graduate application. To begin the application process, set up an account with an existing email address and password.

Program Resources

UI and COE Resources

  • The Blommers Measurement Resource Library contains a collection of books, journals, and reference materials related to educational testing and assessment as well as an extensive collection of published and unpublished tests. The library is staffed by a full-time professional librarian.
  • The   Center for Advanced Studies in Measurement and Assessment   pursues research-based initiatives that lead to advancements in the methodology and practice of educational measurement and assessment. Inter-disciplinary measurement and assessment activities, as well as international activities, may be pursued when they relate to the primary mission.
  • The   Center for Evaluation and Assessment   provides consulting services on campus and across the nation in the assessment of college outcomes and the evaluation of educational and social programs. The Center regularly offers graduate assistantships to qualified students.
  • The Connie Belin and Jacqueline N. Blank Center for Gifted Education and Talent Development   provides teacher training and direct services to gifted students. A variety of experiences including practica and research assistantships are available to graduate students.
  • The   Department of Linguistics   offers courses for those who need to study English as a second language.
  • Professional Development @ the TLC provides graduate students a comprehensive job-seeking resource: from résumé and CV samples, templates, and a critiquing service, to programs that address all aspects of the academic and professional job search, to several publications and video programs that inform and guide every step of the job search. It also offers students a Letter Service for professional letters of recommendation. For more information on the services offered, stop by N140 LC or visit the TLC website.
  • The   Education Technology Center   provides access to a variety of services for College of Education student, faculty and staff, including an open computer lab, three computer classrooms, an Iowa Communications Network (ICN) classroom, wireless laptop technology within Lindquist Center, AV equipment for checkout, digital video editing equipment, and the ETC Support Center (formerly ePortfolio™ Center).
  • The   Graduate Student Advisory Committee  is an organization open to all students in the Educational Measurement and Statistics program that encourages student input on departmental activities and seminars, provides useful information about upcoming events, and promotes student interaction and involvement.
  • Iowa Testing Programs   develop the Iowa Tests of Basic Skills and the Iowa Tests of Educational Development. The programs also provide analytic and consultative services associated with the interpretation and use of test scores. These activities support a number of unique opportunities for student research.
  • The   University Libraries   include the Main Library, the Psychology Library, the Health Sciences Library, and the Business Library.


Program Coordinator

Jonathan Templin E.F. Lindquist Chair in Measurement and Testing and Professor in Educational Measurement and Statistics 224B1 Lindquist Center 319-335-6429 [email protected]



Department of Education and Psychology

Measurement and Evaluation

Goal / Aim / Objectives: 

The overall goal of the programme is to provide individuals with adequate graduate knowledge of educational measurement, research, statistics and evaluation methodologies to be able to teach and provide measurement and evaluation (assessment) services in relevant institutions and organisations with confidence.

The objectives of the programme are to equip graduates with:

  • Adequate understanding of theories and principles in educational measurement and evaluation thereby becoming more confident and innovative in integrating theory and practice to promote scientific uses of measurement within the field of education and related disciplines.
  • Professional competencies and skills to teach assessment, educational statistics, measurement and evaluation and research methods courses at appropriate levels
  • Professional skills to undertake research in the area of assessment, measurement and evaluation. 
  • Skills to lead in educational assessment, measurement and evaluation in the regional and district education offices, colleges of education, and other educational institutions.

Career Opportunities: 

  • Ministry of Education and Ghana Education Service 
  • College of Education (Tutors)
  • Security Services, NGOs 
  • Training Units of institutions

Programme Structure

First Semester

The course is to enable students to gain the computer knowledge needed to complete their thesis and oral examination. It is also to equip students with ICT skills that they may need to teach in their various areas of specialization, and with computer literacy to help them improve the presentation and teaching of Home Economics Education. Data management tools such as MS Excel and MS Access, as well as presentation tools such as PowerPoint, will be explored.

The course discusses various relevant learning theories and their implications for classroom practice. It aims at exposing students to the challenges in the learning environment and how effective teaching could be enhanced using the theories as the basis. Topics such as motivation and other approaches to learning would equip students with various competencies, skills and strategies for classroom teaching.

This course exposes students to the nature and characteristics of psychological tests, the selection of good tests for counselling purposes and the administration, scoring and interpretation of psychological tests for counselling purposes.

An elementary knowledge of statistics, including the use of SPSS, is required for this course. The focus of the course is the application of statistical methods to educational problems. Emphasis is on the normal, t, chi-square and F distributions. Hypothesis testing and one-way analysis of variance will be treated.
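To make the one-way analysis of variance concrete, here is a minimal sketch in base R; the scores, group labels, and group means are invented for illustration and are not course data:

    # One-way ANOVA on hypothetical exam scores for three teaching methods.
    set.seed(42)
    scores <- data.frame(
      method = factor(rep(c("A", "B", "C"), each = 20)),
      score  = c(rnorm(20, 70, 8), rnorm(20, 74, 8), rnorm(20, 78, 8))
    )
    fit <- aov(score ~ method, data = scores)  # partitions between/within variance
    summary(fit)                               # F statistic and p-value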

This course exposes a range of issues and practices in educational inquiry and research. Procedures and methodologies necessary to pursue research problems in measurement, evaluation and applied statistics are investigated. There is an overview of educational research methods, including validity and reliability of data, and practical considerations in planning, conducting and disseminating research outcomes and improving research. The course provides students with skills and knowledge needed for qualitative and quantitative inquiry as well as critiques of research.

This course examines both theoretical and practical issues in students’ assessment. It discusses the nature and relevance of assessment in the teaching and learning processes and examines extensively the theoretical and practical issues of validity and reliability of assessment results. Principles and guidelines for crafting various teacher-made tests and standardized tests are also discussed. Professional responsibilities, appropriate ethical behaviour of educators in assessment as well as legal requirements in educational assessments are presented and discussed.

The course exposes students to a variety of approaches to planning and conducting educational evaluations. It also provides practical guidelines for general evaluation approaches in education. The role of evaluation in improving education, basic concepts and procedures for evaluating educational programmes in applied settings, and alternative views of conducting and using evaluations are examined. Students will be expected to carry out a mini project on educational evaluation and present the evaluation report.

The prerequisite for this course is EPS 853, Assessment in Schools. The course exposes students to the practical aspects of classroom test construction and administration in a school setting, and to classical item analysis, to concretize their competency and skills in ability test construction. The student is expected to work under guidance to review and discuss relevant assessment literature, present assignments related to item construction, and present at least two projects: one on a multiple-choice test and the other on constructed-response items.
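As a hedged sketch of what classical item analysis computes (the 0/1 response matrix below is simulated, not real examinee data), item difficulty and corrected item-total discrimination can be obtained in base R:

    # Classical item analysis for a dichotomously scored test (simulated data).
    set.seed(1)
    resp  <- matrix(rbinom(200 * 10, 1, 0.6), nrow = 200)  # 200 examinees, 10 items
    total <- rowSums(resp)
    p     <- colMeans(resp)  # difficulty: proportion answering each item correctly
    # discrimination: correlation of each item with the total score minus that item
    r_it  <- sapply(seq_len(ncol(resp)), function(j) cor(resp[, j], total - resp[, j]))
    round(cbind(difficulty = p, discrimination = r_it), 2)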

Students enrolled in this course should have taken EPS 851, Intermediate Statistics, and EPS 858, Inferential Statistics. This course is intended to equip students with the ability to analyse data, including the skills of choosing the right statistical test, how to use it, and when to use it. Both parametric and nonparametric tests will be studied. Interpretation of outputs from SPSS computer analysis of the various statistical tests is highlighted. The major topics treated include factorial designs, randomized block and split-plot designs, analysis of covariance, non-parametric statistical methods and factor analysis.
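For instance, an analysis of covariance and a nonparametric alternative can be run as follows in base R; the pretest/posttest setup and effect sizes are invented for illustration:

    # ANCOVA: compare two groups on a posttest, adjusting for a pretest covariate.
    set.seed(7)
    pre   <- rnorm(60, 50, 10)
    group <- factor(rep(c("control", "treatment"), each = 30))
    post  <- 10 + 0.8 * pre + 5 * (group == "treatment") + rnorm(60, 0, 5)
    summary(aov(post ~ pre + group))  # covariate entered before the factor
    kruskal.test(post ~ group)        # nonparametric test if assumptions fail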

This course is intended to enhance students’ knowledge and competency in constructing achievement tests and interpreting the test scores. Topics for the course include test theory, classical true score theory, reliability, validity, standard setting, classical item analysis, test equating theory and fundamentals of item response theory.
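As a compact statement of the classical true-score results this course builds on (standard textbook formulas, not specific to this syllabus):

    X = T + E, \quad \mathrm{Cov}(T, E) = 0, \quad
    \sigma_X^2 = \sigma_T^2 + \sigma_E^2, \quad
    \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = 1 - \frac{\sigma_E^2}{\sigma_X^2}

so reliability is the proportion of observed-score variance attributable to true scores.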

In this course, the student will critically examine issues related to special education assessment policies, the selection of appropriate assessment instruments (e.g. the use of formal and informal assessment techniques), the role of the multi-disciplinary team, and parental involvement. More emphasis will be placed on assessment procedures in Ghana. At the end of the course, the student should be able to adopt appropriate assessment procedures that meet the needs of the child.

The course is designed to reinforce and further develop the writing skills of students at the postgraduate level. It is centred on the notion that future success in postgraduate work depends, to some extent, on the individual student's ability to demonstrate understanding and insight regarding diverse forms of academic writing. This course will further provide students with a nuanced understanding of linguistic and rhetorical theoretical underpinnings, features of academic writing, and the requisite skills in argumentation and research-centred writing.

Second Semester

This course examines classical test theory and its application to the practice of assessment. At a foundational level, model assumptions are explored and used to understand the development of different notions of reliability and dependability. At a practical level, statistical techniques developed from the theory are applied to develop and/or improve assessment practices. Topics treated include the true score model of classical test theory and its assumptions and properties; similarities and differences between parallel, tau-equivalent, essentially tau-equivalent, and congeneric tests; and methods for estimating reliability and validity. An item and reliability analysis will be conducted and the results used to develop a new test or improve an existing one.
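One such reliability estimate, Cronbach's alpha, can be computed from first principles. This is a sketch on simulated essentially tau-equivalent items, where alpha does equal the sum-score reliability; all numbers are invented:

    # Cronbach's alpha from item variances and the total-score variance.
    set.seed(3)
    k     <- 6
    trait <- rnorm(300)                                   # common true score
    items <- sapply(1:k, function(j) trait + rnorm(300))  # tau-equivalent items
    alpha <- (k / (k - 1)) * (1 - sum(apply(items, 2, var)) / var(rowSums(items)))
    alpha  # about 0.86, the Spearman-Brown value for 6 items with r = 0.5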

This course focuses on the construction and use of measures of cognitive achievement and ability. Topics include test planning, item writing, test try-out, item analysis, reliability, validity, criterion-referencing, norm-referencing, item banking, and aptitude test design. Students write items, critique items written by others, construct tests, try out and revise tests, and develop test manuals to document the process of test development and the quality of their tests. Students will also set standards for the tests constructed. Issues on the principles and practice of school-based assessment are discussed.

In this course students are introduced to the design and statistical principles of the experimental approach to educational research, with particular emphasis on the appropriate analysis of data arising from designed experiments. A variety of experimental designs, their advantages and disadvantages, estimation of treatment effects, and significance testing are treated. Various types of analysis of variance are introduced from the general linear models framework. Both univariate and multivariate procedures of data analysis, including ANOVA, ANCOVA and MANOVA, are covered for within- and between-subjects analyses as well as factorial and nested designs. Assumptions underlying the appropriate use of each of the procedures and the interpretation of the results are also covered.

The course deals with the statistical processes of determining comparable scores on different forms of a test and adjusting for differences in test difficulty so that only real differences in performance are reported. It covers concepts of equating, including equating properties, equating designs, equating methods, equating error, and the statistical assumptions necessary for equating. The computation of equating functions and the interpretation of results from equating analyses are covered. The importance of equating to test development and quality control is discussed. The use and interpretation of relevant statistical software will also be treated. Students will design a reasonable and useful equating study and conduct equating in a realistic testing situation.
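
One of the standard equating methods in this area is linear equating, which maps Form X scores onto the Form Y scale by matching means and standard deviations: y = μ_Y + (σ_Y / σ_X)(x − μ_X). Below is a minimal sketch under a random-groups design, with invented score data.

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Return a function converting a Form X raw score to the Form Y scale."""
    mu_x, sd_x = np.mean(x_scores), np.std(x_scores, ddof=1)
    mu_y, sd_y = np.mean(y_scores), np.std(y_scores, ddof=1)
    return lambda x: mu_y + (sd_y / sd_x) * (x - mu_x)

form_x = [20, 25, 30, 35, 40]   # group 1 scores on Form X (hypothetical)
form_y = [22, 28, 33, 39, 44]   # equivalent group's scores on Form Y (hypothetical)
to_y = linear_equate(form_x, form_y)
print(round(to_y(32), 2))       # a Form X score of 32 expressed on the Y scale
```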

This course focuses on the design, development, and implementation of performance-based assessment. Task analysis and design, scoring scheme development and use, and assessment deployment are covered through critique and practice. Emphasis is on describing performance and portfolio assessment within the larger continuum of assessment methods, the advantages and disadvantages of performance-based assessment relative to other assessment methods, authentic and alternative assessment, and reliability and validity issues. Students will design and create a complete, packaged performance assessment.
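
Because performance assessments rely on human judges, their reliability is often checked through rater agreement. As an illustrative sketch (not a prescribed course exercise), Cohen’s kappa below measures chance-corrected agreement between two raters applying the same rubric to hypothetical tasks.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[c] * c2[c] for c in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring ten portfolio tasks on a 0-3 rubric (hypothetical data).
r1 = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]
r2 = [3, 2, 1, 1, 0, 3, 2, 2, 1, 2]
print(round(cohen_kappa(r1, r2), 3))   # ~0.714 for these data
```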

The course is in two parts. The first part focuses on a critical examination of various scholars' theoretical perspectives on fundamental issues in evaluation practice. It is an advanced study of programme evaluation in education and related fields, investigating its purposes and procedures, with attention to settings, personnel, and performance; a review of the principal theories; and the study of models, histories, political contexts, ethics, and the nature of evidence. The second part focuses on applying evaluation theories and models to answering questions in education and dealing with educational problems. Students will deal with clarifying an evaluation request and responsibilities, setting boundaries and analysing an evaluation context, identifying and selecting the evaluative questions and criteria, and planning the information collection, analysis, and interpretation. The course also deals with developing a management plan for the evaluation; collecting, analysing, interpreting, reporting, and using evaluation information; and conducting meta-evaluations.

The course exposes students to a range of issues and practices in educational inquiry and research. Procedures and methodologies necessary to pursue research problems in measurement, evaluation, and applied statistics are investigated. There is an overview of educational research methods, including the validity and reliability of data and practical considerations in planning, conducting, and disseminating research and improving it. The course provides students with the skills and knowledge needed for qualitative and quantitative inquiry, as well as for critiquing research. An overview of appropriate statistical tests in quantitative inquiry, including univariate and multivariate statistics, is also provided.

This course provides the basics of item response theory and examines the use of Item Response Theory (IRT) models for test construction and ability estimation. Concepts, models, and features are discussed. The item characteristic curve and the estimation of its parameters, test characteristic curves, ability estimation, and item and test information functions are treated. The assessment of model fit and efficiency functions is described. Models for tests with dichotomous items are covered, and discussions include the advantages and disadvantages of IRT relative to classical test theory. Models for tests with polytomous and mixed items are also covered. Other topics include the detection of differential item functioning (item bias) and the role of IRT in Computer Adaptive Testing (CAT). Various approaches to assessing item fit are compared, along with their strengths and weaknesses. The software packages currently available for IRT applications are identified and compared; at least two different packages are used and their outputs interpreted.
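
The item characteristic curve at the heart of these models is easy to illustrate. The sketch below evaluates the three-parameter logistic (3PL) model, P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))), with illustrative parameter values; setting c = 0 gives the 2PL, and additionally fixing a gives the Rasch/1PL case.

```python
import numpy as np

def icc_3pl(theta, a=1.2, b=0.0, c=0.2):
    """P(correct) at ability theta: a = discrimination, b = difficulty,
    c = pseudo-guessing (illustrative values, not estimated from data)."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}: P = {icc_3pl(theta):.3f}")
```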

This course examines generalizability theory and its application to the practice of assessment. The course covers generalizability designs (crossed, nested, random, and fixed effects) and coefficients, together with the theory's contributions to performance assessment and its expansion of classical reliability theory. There is a demonstration of how generalizability theory is used for validation studies. A generalizability study will be conducted to determine the magnitude of the sources of error, and the results applied to improve measurement designs within an applied assessment context.
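
For the simplest crossed design (persons × items, both random), the G-study variance components can be estimated from ANOVA mean squares. The sketch below, with invented ratings, is one illustrative way to do this; it also reports the generalizability (relative) coefficient for the given number of items.

```python
import numpy as np

def g_study_pxi(scores):
    """Variance components for a persons-crossed-with-items (p x i) design."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    ss_p = n_i * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_i = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person (universe-score) variance
    var_i = max((ms_i - ms_res) / n_p, 0.0)   # item variance
    var_res = ms_res                          # residual (pi,e) variance
    e_rho2 = var_p / (var_p + var_res / n_i)  # generalizability coefficient
    return var_p, var_i, var_res, e_rho2

# Four persons rated on three items (hypothetical scores).
X = np.array([[7, 8, 6],
              [5, 5, 4],
              [9, 9, 8],
              [6, 7, 5]], dtype=float)
print(g_study_pxi(X))
```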

This course focuses on several aspects of psychological testing. It deals with the historical evolution of psychological tests and the identification and measurement of traits, and covers the design, administration, and analysis of psychological tests. Aptitude tests, intelligence tests such as the Stanford-Binet and Wechsler Intelligence Scales, and personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI) and the Myers-Briggs Type Indicator (MBTI) are treated. For each test, the construction, strengths and limitations, use, interpretation, and adaptation to local conditions are discussed.

Individual guidance and direction in the choice of a problem area is provided by an assigned supervisor. The student is involved in reviewing literature, collecting and analysing data and presenting a final report.


Dissertations / Theses on the topic 'Performance measurement and evaluation'


Below are the top 50 dissertations / theses on the topic 'Performance measurement and evaluation.'

Fowke, Robert Andrew. "Performance Measures for Managerial Decision Making: Performance Measurement Synergies in Multi-Attribute Performance Measurement Systems." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/164.

Tangen, Stefan. "Evaluation and revision of performance measurement systems." Doctoral thesis, KTH, Production Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-19.

Performance measurement is a topic that has received considerable attention in recent decades. There are many motives for using performance measures in a company, but perhaps the most crucial is that, used properly, they help to improve productivity. Productivity is of vital importance to a company’s ability to compete and make profits over time. A company that is not able to efficiently utilise its resources in creating value for its customers will not survive in today's competitive business environment.

However, the development of fully functional and suitable performance measurement systems (i.e., sets of measures) has proven to be a very challenging task. This research focuses on the last phase of the development of a performance measurement system, namely the continuous updating of the performance measures, which has not yet been explored in a satisfactory manner. The objective is to investigate and clarify how to evaluate and revise performance measurement systems. In order to reach this objective, several obstacles that contribute to the complexity of the research area are treated.

The thesis begins by thoroughly investigating the field's confusing terminology, explaining frequently used terms such as productivity, profitability, performance, efficiency, and effectiveness. A categorisation of ways to measure performance is then presented, along with the advantages and shortcomings of different productivity and other performance measures. Several key factors found to affect the productivity of a manufacturing company are also discussed, such as the design of processes and equipment, disturbances and losses, management and control, product design, and job design and work organisation. Much attention is given to the different requirements that performance measurement systems must fulfil, both at the system level and at the measure level. Finally, a method called the performance measurement progression map is proposed, developed to give measurement practitioners a comprehensive guide to evaluating and revising performance measurement systems.

The thesis is concluded with the results from several empirical investigations in which the usefulness of the developed method is validated.

Keywords: Performance measurement, Performance measurement systems, Productivity, Evaluation

Qi, Haijie (齊海杰). "Comprehensive performance measurement method for supply chains." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31226644.

Leighty, James E. "Criteria for evaluation United States Marine Corps installation strategic management." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA401388.

Roberts, L. R. "Performance measurement and evaluation in the manufacturing sector." Thesis, Swansea University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638684.

Scholtz, Reginald. "The manufacturing performance measurement matrix model." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/969.

Kanz, Cindy Lynn. "The measurement of process control in individual performance evaluation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ35901.pdf.

Davis, Rodney T. "Measurement based performance evaluation of a segmental concrete bridge /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Sadanaga, Dean A. "Performance evaluation of integrated METOC measurement system supporting Naval Operations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA374406.

Clark, L. Altyn. "Development, application, and evaluation of an organizational performance measurement system." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-05222007-091424/.

Huang, Kuang-Chung. "Development of an integrated manufacturing performance measurement and evaluation framework." Thesis, Cranfield University, 2000. http://dspace.lib.cranfield.ac.uk/handle/1826/3964.

Janc, Artur Adam. "Network Performance Evaluation within the Web Browser Sandbox." Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/112.

Sherikar, Amruth, Suresh Jange, and S. L. Sangam. "Performance measurement of quality services in academic and research libraries in India." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/105669.

Lee, Ho. "Evaluation of the performance of loop detectors and freeway performance measurement from loop detectors." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1413381609.

Laeeq, Muhammad Nadeem. "Performance evaluation of dryer units used in diesel emission measurement systems." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4208.

Patel, Bhaskar. "Performance measurement and evaluation of supply chain : the Indian automobile industry." Thesis, University of the West of England, Bristol, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409863.

Gibson, Elizabeth Carole. "A Measurement System for Science and Engineering Research Center Performance Evaluation." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/3285.

Jansen, D. E. F. "A proposed performance management system for the greater Stellenbosch Municipality." Thesis, Peninsula Technikon, 2003. http://hdl.handle.net/20.500.11838/2104.

Morales-Montejo, Clemencia. "A systems study of the scope and significance of evaluation methodologies in the management of organisations in Colombia." Thesis, University of Lincoln, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312901.

De, Jager Nicolaas Fourie. "Evaluation of the balanced scorecard system within a steel organisation in South Africa / Nicolaas Fourie de Jager." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4791.

Walters, Jeromie L. "Online Evaluation System." University of Akron / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=akron1113514372.

McGowan, Kenneth. "Measurement and evaluation of the performance of an integral abutment bridge deck." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3943.

Erasmus, Juanita Esther. "A performance measurement model incorporating 360-degree evaluation of corporate values / Juanita Esther Erasmus." Thesis, North-West University, 2007. http://hdl.handle.net/10394/746.

Draai, Kevin. "A model for assessing and reporting network performance measurement in SANReN." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/16131.

Mushrush, Christopher D. "Development of an assessment process for the evaluation of contractor performance measurement baselines." Master's thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-02022010-020239/.

Shade, Benjamin C. "A performance evaluation of the MEMS: an on-road emissions measurement system study." Morgantown, W. Va. : [West Virginia University Libraries], 2000. http://etd.wvu.edu/templates/showETD.cfm?recnum=1592.

Aluguri, Tarun. "Performance Evaluation of OpenStack Deployment Tools." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13388.

Roditis, Ioannis Stavros. "A Performance Evaluation of the Indicator Kriging Method on a Gold Deposit: A Comparison with the Ordinary Kriging Method." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275482.

Oztorul, Guliz. "Performance Evaluation Of Banks And Banking Groups: Turkey Case." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613821/index.pdf.

Gupta, Shruti. "Performance Analysis of Quantitative Bone Measurement with Spiral, Multi-Detector CT Scanners." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1229708472.

Flavell, Drew G., and Timothy E. Dorwin. "Evaluation of the Space and Naval Warfare Systems Command (SPAWAR) Cost and Performance Measurement." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA373342.

Ashton, Paul. "The interaction network : a performance measurement and evaluation tool for loosely-coupled distributed systems." Thesis, University of Canterbury. Computer Science, 1992. http://hdl.handle.net/10092/7958.

Flavell, Drew G., and Timothy E. Dorwin. "Evaluation of the Space and Naval Warfare Systems Command (SPAWAR) Cost and Performance Measurement." Thesis, Monterey, California: Naval Postgraduate School, 1999. http://hdl.handle.net/10945/39415.

Xu, Bing. "Multidimensional approaches to performance evaluation of competing forecasting models." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4081.

Purello, Michael. "Development of a computerized information system which supports measurement and assessment of AEGIS Combat Systems Center performance." Master's thesis, This resource online, 1997. http://scholar.lib.vt.edu/theses/available/etd-12232009-020334/.

Nichols, III James G. "Measurement of Windows Streaming Media." Digital WPI, 2004. https://digitalcommons.wpi.edu/etd-theses/237.

Hedler, Francielly. "Global warehouse management : a methodology to determine an integrated performance measurement." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI082/document.

Oliveira, Thiago Barra Vidal de. "Metrological Evaluation of the V-Cone Type Meter Performance for Wet Gas Flow Rate Measurement." Pontifícia Universidade Católica do Rio de Janeiro, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32997@1.

Watson, Yvonne M. "Federal Managers' Use of Evidence (Performance Measurement Data and Evaluation Results): Are We There Yet?" Thesis, The George Washington University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423761.

Understanding federal managers’ use of evidence (performance measurement data and evaluation results) to inform decision-making is an important step to develop concrete strategies to remove barriers to use and increase use. The goals of this research are to: 1) explain the extent to which senior level managers and executives in federal agencies use performance measurement data and evaluation findings and results to inform decision-making; 2) understand the factors that influence use of evidence to inform decision-making; and 3) explore strategies to enhance the use of evidence.

The study employs a case study approach focusing on four federal agencies whose managers exhibit varying degrees of success in utilizing evidence (e.g., performance measurement data and program evaluation results). The four case study agencies are: the United States Agency for International Development (AID), the Department of the Treasury (Treasury), the Small Business Administration (SBA), and the Department of Transportation (DOT). The study relied on publicly available secondary data sources, supplemented by document reviews and interviews with a small number of key informants.

The findings indicate that performance measurement use occurs within the four case study agencies; however, its use declined from 2007 to 2017 for SBA, DOT, and Treasury. Although a decline in use for some categories was evident at AID, other types of use increased. The results indicate that roughly 40% or more of respondents at the case study agencies use performance measurement data to inform decisions related to program strategy, problem identification and improvement, and personnel performance issues.

The data also suggest important distinctions and nuances associated with the levels of management that use performance information, as well as with specific types of use. For example, an agency's top leaders and first-line supervisors are more likely to use performance measurement data, whereas middle management tends to be less likely to use data to inform decisions about changes to a program.

The most common factors influencing performance information use across the four case study agencies include managers' perceptions about who pays attention to performance information, the lack of incentives, and the perceived authority (or lack thereof) to make changes to improve the program. In addition, access to timely and readily available data, information technology and/or systems capable of providing the needed data, access to training, and staff knowledge and expertise to develop performance measures and conduct evaluations were found to influence the use of performance measurement.

There was an overall decline from 2013 to 2017, in all four case study agencies, in the percentage of managers who reported that an evaluation of their program had been conducted. Despite this decline, over 50% of AID managers were aware of an evaluation conducted within the past five years. The lower responses reported by DOT (28%), SBA (32%), and Treasury (34%) are consistent with the absence of robust program evaluation efforts. In 2017, managers at AID, SBA, and Treasury reported using program evaluation results to implement changes to improve program management or performance, while AID, DOT, and Treasury managers reported using program evaluation to assess program effectiveness, value, or worth.

Hammar, Jenny, and Erik Lagerborg. "Performance measurement and evaluation for cultural non-profit organisations : A model developed for Swedish museums." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-35859.

Leo, Terrance. "The development of a financial performance measurement framework for South African education institutions." Thesis, Port Elizabeth Technikon, 2003. http://hdl.handle.net/10948/217.

Jansen, van Vuuren Eugene. "Developing a performance measurement system for policing : South African Police Service." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51683.

Beange, Kristen. "Validation of Wearable Sensor Performance and Placement for the Evaluation of Spine Movement Quality." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38698.

Davis, Ethan J. "Vehicle dynamics measurement system for evaluating older driver performance." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0014284.

Mou, Merry. "Evaluating a TCP model-based network performance measurement method." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113177.

Cortis, Natasha. "Challenging the "new accountability"? Service users' perspectives on performance measurement in family support." Connect to full text, 2006. http://hdl.handle.net/2123/1913.

Dunbar-Isaacson, Hazel. "An investigation into the measurement invariance of the performance index." Thesis, Link to the online version, 2006. http://hdl.handle.net/10019/534.

Eilard, Hillevi, and Albina Iljasov. "The Use of Social Impact Measurements in Socially Entrepreneurial Organizations - A Quantitative Survey Study on Organizational Size." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23983.

Rodoplu, Umut. "An Approximation Method For Performance Measurement In Base-stock Controlled Assembly Systems." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1260467/index.pdf.

DeGroff, Amy S. "New Public Management and Governance Collide: Federal-Level Performance Measurement in Networked Public Management Networks." Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29654.


COMMENTS

  1. Educational Measurement and Research Theses and Dissertations

Theses/Dissertations from 2014: Documenting Elementary Teachers' Sustainability of Instructional Practices: A Mixed Method Case Study, Bridget Cotner; Validation of the Scores of the Instructional Pedagogical and Instructional Student Engagement Components of Fidelity of Implementation, Sandra F. Naoom.

  2. Educational Inquiry, Measurement, and Evaluation Theses and Dissertations

Educational Inquiry, Measurement, and Evaluation Theses and Dissertations. Theses/Dissertations from 2023: Evaluating the Impact of Math Self-Efficacy, Math Self-Concept, and Gender on STEM Enrollment and Retention in Postsecondary Education, Marcia Bingham.

  3. Browse Theses, Dissertations, or other Student Work By Discipline

Browse Theses, Dissertations, or other Student Work By Discipline - EDUCATIONAL RESEARCH, MEASUREMENT AND EVALUATION. Disciplines available to browse include Accounting, African American and African Diaspora Studies, Anthropology, Applied Geography, Applied Statistics, Art, Art History, Arts Administration, Biochemistry, ...

  4. PDF Test, measurement, and evaluation: Understanding and use of the ...

    Test, measurement, and evaluation are concepts used in education to explain how the progress of learning and the final learning outcomes of students are assessed. However, the terms are often misused in the field of education, especially in Ghana.

  5. Using reliability and item analysis to evaluate a teacher-developed

He has a postgraduate degree in Educational Measurement and Evaluation from the University of Cape Coast, Ghana. Kwamina is interested in conducting classroom assessment-related research and promoting the quality of education interventions. ... (MSc thesis). Florida Atlantic University, Boca Raton, FL. Retrieved June 14, 2015 from www.physics.Au ...

  6. The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation

The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. Edited by Bruce B. Frey. Publisher: SAGE Publications, Inc. Publication year: 2018. Online pub date: June 05, 2018. Discipline: Education. Methods: Educational research, Measurement. DOI: https://doi.org/10.4135/9781506326139

  7. PDF Theoretical Framework for Educational Assessment: A Synoptic Review

    dissertations were scrutinized in an attempt to find a comprehensive definition of the concept of assessment. These ... and educational measurement that there seems to be no consensus on what precisely it means" (p.6). Brown (2004) ... Evaluation refers to the process of arriving at judgments about abstract entities such as programs,

  8. (Pdf) Educational Measurement and Evaluation and Its Impact on

    Educational measurement and evaluation are inevitable in the teaching-learning process, especially now that the academic assessment of students is becoming technologically advanced. It plays a ...

  9. (PDF) Measurement and Evaluation in Education

    Educational measurement and evaluation refers to the use of educational assessments and the analysis of data such as scores obtained from educational assessments to understand the...

  10. Educational Research, Measurement, and Evaluation, Ph.D

    For the Ph.D. in Education Research, Measurement, and Evaluation, the dissertation may be quantitative, qualitative, or mixed methods. Whatever type of design, it must adhere to current standards for quality as reflected in professional writing on the chosen method of research design and reflected in the current literature.

  11. Assessment and Evaluation in Educational Contexts

Over the years, the assessment/evaluation paradigm has shifted from a focus on measurement towards a focus on efforts to improve learning (Wyatt-Smith 2014). In an international review undertaken by the OECD, experts from 28 countries agreed that the ultimate objective of assessment and evaluation is to improve the quality of education in countries and, as a consequence, raise student outcomes ...

  12. Educational Measurement and Evaluation Project Topics and Papers

Research Papers/Topics in Educational Measurement and Evaluation: Quality and Productivity of Teachers in Selected Public Primary Schools in Apac District, Uganda.

  13. Ph.D. in Research, Measurement, and Evaluation

    Overview. The curriculum of the Ph.D. in RME is structured around six components: (A) a core set of 36 credits (12 courses of 3 credits each) of required coursework covering the fundamentals of research design, measurement, and statistical analysis; (B) 6 credits of a research apprenticeship, in which students conduct mentored research under ...

  14. Dissertations / Theses: 'Measurement and Evaluation'

    In recent years, Sweden has adopted a criterion-referenced grading system, where the grade outcome is used for several purposes, but foremost for educational evaluation on student- and school levels as well as for selection to higher education. This thesis investigates the consequences of using criterion-referenced measurement for both ...

  15. Evaluation, Measurement and Statistics Specialization

    The exam consists of two parts. The first part is based on the course work in EMS, covering topics in statistical methodology, educational measurement, and evaluation. The second part is based on the dissertation topic area. The student, with consultation from the adviser, develops a reading list for the specialization examination.

  16. Ph.D. in Educational Research, Measurement, and Evaluation

Overview. The Ph.D. in Educational Research, Measurement, and Evaluation is designed for individuals who are interested in becoming experts in research methodology, measurement, applied statistics, or program evaluation. The program targets experienced educators who hold a master's degree in a related educational field. Career opportunities may be found in a wide variety of educational ...

  17. PDF Critical Areas of Measurement and Evaluation in Education

Educational measurement and evaluation are integral parts of teaching and learning processes that examine the appropriateness of teaching methods, the relevance of curriculum contents, and the quality of learning outcomes (Ikoro and Opa, 2011). Nworgu (2003) defined Measurement and Evaluation separately. ...

  18. Dissertations / Theses: 'Educational Assessment, Evaluation, and

    Dissertations / Theses on the topic 'Educational Assessment, Evaluation, and Research' To see the other types of publications on this topic, follow the link: Educational Assessment, Evaluation, and Research. Author: Grafiati Published: 4 June 2021 Last updated: 26 January 2023

  19. PDF Harvard Graduate School of Education

    Education Policy and Program Evaluation. Thesis: Essays on measurement and causal inference in education. A. Ho, ... Mark Chin, Education Policy and Program Evaluation. Thesis: Three Essays on Educational Policy and Equity. M. West, D. Ang, D. Deming. Mariam Dahbi, ... Education Policy and Program Evaluation. Thesis: Recruiting and Preparing ...

  20. How My Dissertation Came to be through ESM's Support and Guidance

    In addition to my PhD in ESM, I also completed graduate certificates in Women, Gender, and Sexuality and Qualitative Research Methods in Education. While I was a part-time graduate student, I also worked full-time as an evaluation associate at Synergy Evaluation Institute, a university-based evaluation center. By day, I worked for clients ...

  21. Educational Measurement and Statistics

    Inter-disciplinary measurement and assessment activities, as well as international activities, may be pursued when they relate to the primary mission. The Center for Evaluation and Assessment provides consulting services on campus and across the nation in the assessment of college outcomes and the evaluation of educational and social programs ...

  22. MEASUREMENT AND EVALUATION

    The objectives of the programmes are to equip the graduate to acquire: Adequate understanding of theories and principles in educational measurement and evaluation thereby becoming more confident and innovative in integrating theory and practice to promote scientific uses of measurement within the field of education and related disciplines.

  23. Dissertations / Theses: 'Performance measurement and evaluation'

    This thesis highlights the importance of performance measurement and evaluation in the management information and control system. Increasing global competition requires companies to pay more attention to the external environment in terms of satisfying customer's requirements and surpassing competitor's performance.

  24. Ph.D. in Leadership in Education Dissertation Defense: Elizabeth

    The School of Education invites you to attend a doctoral dissertation defense by Elizabeth Farmosa "Evaluating an Honors College at a Large Public University: Factors for Honors Retention." Candidate: Elizabeth Farmosa Degree: Doctoral- Research & Evaluation Defense Date: Thursday, March 28, 2024 Time: 11:30 a.m.

  25. Measurement and Evaluation

    The overall goal of the programme is to provide individuals with adequate graduate knowledge of educational measurement, research, statistics and evaluation methodologies to be able to teach and provide measurement and evaluation (assessment) services in relevant institutions and organisations with confidence.