Search results
1 – 10 of over 150,000
Allan H. Church, Christopher T. Rotolo, Alyson Margulies, Matthew J. Del Giudice, Nicole M. Ginther, Rebecca Levine, Jennifer Novakoske and Michael D. Tuller
Abstract
Organization development is focused on implementing a planned process of positive humanistic change in organizations through the use of social science theory, action research, and data-based feedback methods. The role of personality in that change process, however, has historically been ignored or relegated to a limited set of interventions. The purpose of this chapter is to provide a conceptual overview of the linkages between personality and OD, discuss the current state of personality in the field including key trends in talent management, and offer a new multi-level framework for conceptualizing applications of personality for different types of OD efforts. The chapter concludes with implications for research and practice.
Abstract
Purpose
The purpose of this paper is to explore the enacted mental models, the types of thinking and action, of assessment held by faculty and staff in higher education.
Design/methodology/approach
This research approaches the question: in what ways is “learning outcomes assessment” understood (thinking) as part of a system and assessed in the individual’s work (practice)? Interviews and concept maps were used to identify influences, descriptions of actions, and connections to environments for 12 participants known to have engaged in learning outcomes assessment.
Findings
By connecting individual perspectives to broader organizational understanding, a goal of this research was to identify and analyze how educators understand and practice learning outcomes assessment in higher education. Influences on assessment presented in the literature are confirmed and several behavioral types are defined and categorized.
Research limitations/implications
The findings focus attention on the ways individuals act on influences in systems of higher education and yield opportunities for new ways to utilize assessment knowledge. The study is small, and its implications apply to similar types of institutions.
Practical implications
Faculty and staff can use these findings to create training and development protocols and/or adjust their own practices of assessment. Assessment professionals can apply findings to consulting on an array of assessment projects and with staff who have varying skill levels.
Social implications
The ways in which assessment is practiced are deeply influenced by training but are also shaped heavily by current environments and accountability structures. Policies and practices related to such environments can make a difference in preparing for scaled-up assessment practices and projects.
Originality/value
This research offers insight into possible archetypes of assessment behaviors and presents applied influences on assessment.
Allan H. Church, Lorraine M. Dawson, Kira L. Barden, Christina R. Fleck, Christopher T. Rotolo and Michael Tuller
Abstract
Benchmark surveys regarding talent management assessment practices and interventions of choice for organization development (OD) practitioners have shown 360-degree feedback to be a popular tool for both development and decision-making in the field today. Although much has been written about implementing 360-degree feedback since its inception in the 1990s, few longitudinal case examples exist where interventions have been applied and their impact measured successfully. This chapter closes the gap by providing research findings and key learnings from five different implementation strategies for enhancing 360-degree feedback in a large multi-national organization. Recommendations and implications for future research are discussed.
Daniel L. Pearce and Wolfram E. Verlaan
Abstract
Purpose – To provide a resource for educators and graduate students that contains information about using formal assessment data to plan literacy instruction and intervention.
Design/methodology/approach – Several aspects of formal assessment are presented, including a definition of formal assessment, types of formal assessment scores, commonly used formal assessments, and recommendations for using formal assessments for individuals and groups. Information about formal assessment is informed both by documented sources and the experiences of the authors.
Findings – The authors provide an overview of common, commercially available assessments designed to measure literacy achievement in either individuals or groups. Reviews of formal assessments include scores, number of forms, literacy domains measured, and published reliability figures. Recommendations for formal assessment use include using assessment data to plan instruction and intervention for both individuals and groups. In addition, a case study is presented demonstrating the efficacy of using formal assessment data to plan instruction and intervention in a K-6 elementary school in the United States.
Research limitations/implications – The review of commercially available individual and group literacy assessments does not constitute an exhaustive list.
Practical implications – Information about formal assessments, assessment score types, and formal assessment uses is consolidated in one location for easy access by graduate students and other educators.
Originality/value – This chapter provides graduate students and others in the field of education an overview of formal assessments and how formal assessment data can be used to make instructional decisions for both individuals and groups.
Amina Aouine, Latifa Mahdaoui and Laurent Moccozet
Abstract
Purpose
The purpose of this paper is to focus on the problems of assessing individuals in learning groups/teams, which should lead to the assessment of the group/team itself as a learning entity.
Design/methodology/approach
In this paper, an extension of the IMS-Learning Design (IMS-LD) meta-model is proposed in order to support the assessment of collaborative activities in e-learning. In addition, the software architecture supporting that extension, which consists of a set of components forming a web wizard to create, track and assess collaborative assessment processes, is described.
Findings
The proposed solution makes assessment fairer by using individual and collective assessment indicators to assign final scores to learners; supports step-by-step assessment for better individual and collective monitoring of activities; and divides the assessment into lighter phases for the correctors. Consequently, the evaluator has more detailed information about his/her students, and the quality of judgment improves. This could also help the evaluator plan further examinations.
Research limitations/implications
Further experiments are necessary to test the effectiveness of the proposed system and to analyze its performance under massive usage. In addition, the authors plan to use a survey to collect learners’ opinions on the proposal’s fairness in the assessment of collaborative activities in an online community.
Originality/value
This paper addresses important issues in the educational area, especially the assessment of collaborative activities. Approaches such as peer assessment attempt to reduce subjectivity and increase fairness in assessing learners in collaborative work. However, when group work is assessed, the same mark is often attributed to all group members, an approach the authors conclude is neither fair nor objective.
Abstract
Purpose
As a way of focusing curriculum development and learning outcomes, universities have introduced graduate attributes, which their students should develop during their degree course. Some of these attributes are discipline‐specific; others are generic to all professions. The development of these attributes can be promoted by the careful use of self‐ and peer assessment. The authors have previously reported using the self‐ and peer assessment software tool SPARK in various contexts to facilitate opportunities to practise, develop, assess and provide feedback on these attributes. This research and that of the other developers identified the need to extend the features of SPARK, to increase its flexibility and capacity to provide feedback. This paper reports the results of initial trials investigating the potential of these new features to improve learning outcomes.
Design/methodology/approach
The paper reviews some of the key literature on self‐ and peer assessment, discusses the main aspects of the original online self‐ and peer assessment tool SPARK and the new version SPARKPLUS, and reports and analyses the results of a series of student surveys to investigate whether the new features and applications of the tool have improved learning outcomes in a large multi‐disciplinary Engineering Design subject.
Findings
It was found that using self‐ and peer assessment in conjunction with collaborative peer learning activities increased the benefits to students and improved engagement. Furthermore, the new features available in SPARKPLUS facilitated efficient implementation of additional self‐ and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes. The trials demonstrated that the tool assisted in improving students' engagement with and learning from peer learning exercises, in collecting and distributing feedback, and in helping students identify their individual strengths and weaknesses.
Practical implications
SPARKPLUS facilitates the efficient management of self‐ and peer assessment processes even in large classes, allowing assessments to be run multiple times a semester without an excessive burden for the coordinating academic. While SPARKPLUS has enormous potential to provide significant benefits to both students and academics, it is necessary to caution that, although a powerful tool, its successful use requires thoughtful and reflective application combined with good assessment design.
Originality/value
It was found that the new features available in SPARKPLUS efficiently facilitated the development of new self‐ and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes.
Vítor Vasata Macchi Silva and José Luis Duarte Ribeiro
Abstract
Purpose
This article presents an investigation of the suitability of using quantitative or qualitative data for individual competencies assessment. Specifically, the primary purpose of this article is to identify if the results provided by quantitative and qualitative instruments focused on individual competencies are convergent.
Design/methodology/approach
To carry out the proposed investigation, a survey on individual competencies was conducted with a group of employees from the administrative area of a steel company. A total of 268 evaluations were collected and analyzed.
Findings
The analysis of the employees' performance appraisals provided by ratings and narrative comments indicates a low correlation between these assessments. Reasons for this low correlation include the variability of the qualitative assessments, the restricted list of competencies used in the quantitative assessments, and the analytical format of the quantitative assessments.
Originality/value
The study indicates that quantitative and qualitative assessments should be carried out jointly so that they can generate more comprehensive results. When the combined use is not possible, the quantitative approach is better suited for assessing performance, while the qualitative approach provides more valuable insights for boosting people development processes.
Chet Robie, Kathleen A. Tuzinski and Paul R. Bly
Abstract
Purpose
To gather information on assessor beliefs and behaviors in relation to assessee faking issues on a personality inventory in the individual assessment process.
Design/methodology/approach
A survey approach was used in this research. A total of 77 experienced assessors who conducted individual assessments for an international consulting firm responded to the survey. Analyses of mean item rankings were used to answer several research questions.
Findings
Major results of the study were: assessors believe faking is a problem; assessors believe they can detect faking; and assessors believe they can effectively eliminate all of the effects of faking when evaluating the candidates.
Practical implications
The first implication from this research is that assessors believe that they can detect and deal with faking despite a paucity of evidence to support it. The second implication is that organizations may be reluctant to continue to develop effective methods of identifying and dealing with faking if their assessors mistakenly believe they are already successfully doing so.
Originality/value
This study is the first to survey experienced assessors regarding their beliefs and perceptions of faking issues in the individual assessment process and is designed to garner immediate practical insights and ideas for future testable hypotheses.
Frederick J. Brigham, Stacie Harmer and Michele M. Brigham
Abstract
Traumatic brain injury (TBI) is unique among areas of eligibility for students with disabilities in federal special education legislation, not in what is assessed, but why the assessment is taking place. If not for the injury, most individuals with TBI would be unlikely to come to the attention of special educators. Few education training programs appear to allocate sufficient attention to the category, so we present background information regarding prevalence, recovery, and outcomes before summarizing advice from the literature regarding assessment of individuals with TBI in schools. Although educators are unlikely to be involved in the initial diagnosis of TBI, they can be important collaborators in promoting recovery or detecting a worsening condition. Almost every assessment tool available to educators is likely to be of value in this endeavor. These include both formal and informal approaches to assessment. Working with individuals with TBI requires sensitivity and compassion.