Search results

1 – 10 of over 78000
Article
Publication date: 16 November 2015

David Greenfield, Deborah Debono, Anne Hogden, Reece Hinchcliff, Virginia Mumford, Marjorie Pawsey, Johanna Westbrook and Jeffrey Braithwaite

Abstract

Purpose

Health systems are changing at variable rates. Periods of significant change can create new challenges or amplify existing barriers to accreditation program credibility and reliability. The purpose of this paper is to examine, during the transition to a new Australian accreditation scheme and standards, challenges to health service accreditation survey reliability, the salience of the issues and strategies to manage threats to survey reliability.

Design/methodology/approach

Across 2013-2014, a two-phase, multi-method study was conducted involving five research activities (two questionnaire surveys and three group discussions). This paper reports data from the transcribed group discussions, involving 100 participants, which were subjected to content and thematic analysis. Participants were accreditation survey coordinators employed by the Australian Council on Healthcare Standards.

Findings

Six significant issues influencing survey reliability were reported: accreditation program governance and philosophy; accrediting agency management of the accreditation process, including the program’s framework; survey coordinators; survey team dynamics; individual surveyors; and healthcare organizations’ approach to accreditation. A change in governance arrangements promoted reliability with an independent authority and a new set of standards, endorsed by Federal and State governments. However, potential reliability threats were introduced by having multiple accrediting agencies approved to survey against the new national standards. Challenges that existed prior to the reformed system remain.

Originality/value

Capturing lessons and challenges from healthcare reforms is necessary if improvements are to be realized. The study provides practical and theoretical strategies to promote reliability in accreditation programs.

Details

Journal of Health Organization and Management, vol. 29 no. 7
Type: Research Article
ISSN: 1477-7266

Article
Publication date: 1 October 2004

Kimberly A. Dunn and H. Fenwick Huss

Abstract

Two problems may arise when gathering data by mail questionnaire: survey recipients may fail to respond, or they may respond but provide unreliable data. Extant research has examined the effects of pressure to respond on increasing response rates; however, efforts to achieve a high response rate may produce unreliable data. The analysis presented in this paper examines the effects of increased pressure to respond to mail surveys on the reliability of survey responses. Our findings suggest that increased pressure to respond decreased the reliability of the information obtained. In addition, we show that personalization of the survey increases the reliability of responses. However, the conclusions concerning information suppression are not clear. The type of auditor change was a significant determinant of response reliability, but companies in greater financial distress were no more likely to give unreliable responses concerning independent auditor changes.

Details

Managerial Auditing Journal, vol. 19 no. 8
Type: Research Article
ISSN: 0268-6902

Article
Publication date: 21 July 2023

Andrew Asher, Kristin Briney and Abigail Goben

Abstract

Purpose

This article describes the development processes, sampling and analysis practices and the assessment of reliability and validity of a new survey that sought to evaluate undergraduate students' perceptions and expectations related to privacy and library participation in learning analytics studies. This article provides other researchers with the information required to independently evaluate the survey's efficacy, as well as guidance for designing other surveys.

Design/methodology/approach

Following question development, pre-survey validity assessments were made using subject matter expert panel review and cognitive interviews. Post-hoc analysis of survey construct reliability was evaluated using the Omega coefficient, while exploratory factor analysis was utilized to assess construct validity. Survey design limitations and potential bias effects are also examined.

Findings

The survey exhibited a high level of reliability among research constructs, while the exploratory factor analysis results suggested that survey constructs contained multiple conceptual elements that should be measured separately for more nuanced analysis.

Practical implications

This article provides a model for other researchers wishing to re-use the survey described or develop similar surveys.

Social implications

As learning analytics interest continues to expand, engaging with the subjects, in this case students, of analysis is critical. Researchers need to ensure that captured measurements are appropriately valid in order to accurately represent the findings.

Originality/value

This survey is one of very few addressing library learning analytics that has undergone extensive validity analysis of the conceptual constructs.

Details

Performance Measurement and Metrics, vol. 24 no. 2
Type: Research Article
ISSN: 1467-8047

Article
Publication date: 25 September 2018

Sara Dolnicar

Abstract

Purpose

Survey research has developed to become the default empirical approach to answering research questions in the field of hospitality (and many other fields of research within the social sciences). This paper aims to reflect on the use of survey research in hospitality and offers recommendations for improvement.

Design/methodology/approach

First, known dangers to validity associated with survey research are discussed. Next, a sample of studies recently published in leading hospitality journals is assessed in view of these known dangers. Finally, recommendations are offered for editors, reviewers, readers and authors to mitigate the risk of drawing invalid conclusions based on survey research.

Findings

Survey research is very common in hospitality research and is used to investigate a wide range of research questions and constructs under study. The nature of constructs studied, the answer scales used and the nature of the samples point to a substantial risk to the validity of conclusions drawn.

Practical implications

A number of risk mitigation measures are proposed that can help authors minimise the risks to validity arising from known dangers associated with survey research. These same risk mitigation measures can be used by editors and reviewers in the assessment of manuscripts and by readers to evaluate the validity of conclusions drawn in already published work.

Originality/value

The value of this study lies in reflecting from a distance on how survey research is conducted in the social sciences in general and in hospitality research in particular. The paper reveals that some routine approaches particularly prone to undermining the validity of conclusions may have been adopted, and it offers suggestions for how this risk can be mitigated.

Details

International Journal of Contemporary Hospitality Management, vol. 30 no. 11
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 29 March 2013

Bülend Terzioğlu, Elsie Chan and Peter Schmidt

Abstract

Purpose

The aim of this paper is to review 73 survey articles relating to information technology outsourcing (ITO) published by 17 information technology journals over the 20‐year period 1991‐2010. The review focuses on seven attributes of survey methodology (i.e. information on research questions, pilot testing of the survey instrument, sampling method employed, sample size, response rate, nonresponse bias and internal validity) and ascertains the extent to which those attributes have been addressed. The main purpose of this study is to provide insights for researchers to help improve data quality and the reliability of survey results.

Design/methodology/approach

Review of literature over the past 20 years (1991‐2010).

Findings

There is strong evidence that deficiencies in the administration of survey methods in ITO persist and that such shortcomings compromise rigour, and therefore need to be redressed.

Practical implications

Although this review is performed in an ITO context, findings are of interest and benefit to all survey researchers. The key contribution of this paper is that it provides up‐to‐date evidence regarding quality of survey research as it applies to ITO by identifying areas needing attention so that the integrity of survey research methodology can be maintained and it can continue to provide reliable findings for the advancement of knowledge.

Originality/value

This study provides an examination of literature dealing exclusively with an IT outsourcing survey. It can, however, serve as a guide for all survey researchers regarding the pitfalls in survey methodology.

Details

Asia-Pacific Journal of Business Administration, vol. 5 no. 1
Type: Research Article
ISSN: 1757-4323

Article
Publication date: 15 March 2022

Jason Martin

Abstract

Purpose

An effective measurement of library leadership is crucial to understanding the current state of library leadership and to developing library leaders. This study sought to validate and measure the reliability of the Martin Library Leadership survey.

Design/methodology/approach

This survey is based on the Martin Library Leadership Definition, an evidence-based definition of library leadership. The first version of the survey consisted of 28 questions plus questions on respondent and library leader demographics. Each question measured one of the three components of the definition. This version of the survey was distributed to multiple ALA listservs and, after analysis, 16 items were removed. The resulting 12-question version of the survey was sent to the same ALA listservs and completed by 291 librarians and library staff from various library types and library work areas. The responses were analyzed using SPSS.

Findings

Exploratory factor analysis found three factors that align with the three components of the Martin Library Leadership Definition, and questions loaded on their expected factors at 0.7 or higher. Cronbach's alpha was used to determine internal consistency; the alpha for the entire survey was 0.956. The Martin Library Leadership survey was validated and found to be reliable.

Originality/value

The results of this study provide strong and consistent evidence that the Martin Library Leadership survey is valid and can be used in further library leadership research and professional development.

Article
Publication date: 1 March 2000

B.S. Dhillon and M.A. Aleem

Abstract

This paper presents the results of a survey of Canadian robot users concerning robot reliability and safety. Data on 26 questions were analyzed, and the resulting findings are presented in the form of tables, histograms, pie charts, etc. Conclusions include: approximately 75 per cent of companies use robots for commercial purposes; the most common type of robot used in industry is the intelligent robot; robot manufacturers frequently provide ineffective maintenance manuals; and robot‐related problems generally number fewer than 50 per year.

Details

Journal of Quality in Maintenance Engineering, vol. 6 no. 1
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 2 April 2019

Mahmoud AlQuraan

Abstract

Purpose

The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on construct validity of student evaluations of teaching (SET) in higher education.

Design/methodology/approach

A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. An IRT-based detection method was used to detect IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing detected IER responses.

Findings

The results of this study show that 2,160 SET surveys were flagged as insufficient effort responses out of 13,340 surveys. This figure represents 16.2 percent of the sample. Moreover, the results of CFA and PCA show that removing detected IER statistically enhanced the construct (factorial) validity of the SET survey.

Research limitations/implications

Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER responses, as SET results are used in making critical decisions.

Practical implications

The results of the current study recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students’ engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should give the necessary attention to IER by examining the IER rate in their data sets and its consequences for data quality.

Originality/value

Reviewing the related literature shows that this is the first study that investigates the effect of IER on construct validity of SET in higher education using an IRT-based detection method.

Details

Journal of Applied Research in Higher Education, vol. 11 no. 3
Type: Research Article
ISSN: 2050-7003

Article
Publication date: 7 September 2021

Christian Nnaemeka Egwim, Hafiz Alaka, Luqman Olalekan Toriola-Coker, Habeeb Balogun, Saheed Ajayi and Raphael Oseghale

Abstract

Purpose

This paper aims to identify the underlying factors causing construction project delay from among the most applicable delay factors.

Design/methodology/approach

The paper surveyed experts, drawing on a systematic review of a vast body of literature that revealed 23 common factors affecting construction delay. This study then carried out reliability analysis, ranking using the significance index measurement of delay parameters (SIDP), correlation analysis and factor analysis. From the results of the factor analysis, this study grouped a specific underlying factor into each of three of the six applicable factors that correlated strongly with construction project delay.

Findings

The paper finds all factors from the reliability test to be consistent. It suggests project quality control, project schedule/program of work, contractors’ financial difficulties, political influence, site conditions and price fluctuation to be the six most applicable factors for construction project delay, which are in the top 25% according to the SIDP score and at the same time are strongly associated with construction project delay.

Research limitations/implications

Because this study followed a deductive research approach, it recommends that prospective research use a qualitative, inductive approach to investigate whether any new, previously unidentified underlying factors that impact construction project delay can be discovered.

Practical implications

The paper includes implications for policymakers in the Nigerian construction industry, who should focus on measuring key suppliers’ delivery performance, as late delivery of materials by a supplier can result in rescheduling of work activities and extra or waiting time for construction workers as well as for the management team at the site. Construction stakeholders in Nigeria are also encouraged to leverage the data produced from backlogs of project schedules, as-built drawings and models, computer-aided designs (CAD), costs, invoices and employee details, among many others, through state-of-the-art data-driven technologies such as artificial intelligence or machine learning, in order to make key business decisions that help drive further profitability. Furthermore, this study suggests that these stakeholders use climatological data obtained from weather observations to minimize the impact of bad weather during construction.

Originality/value

This paper establishes the three underlying factors (late delivery of materials by suppliers, poor decision-making and inclement weather) causing construction project delay from among the most applicable delay factors.

Details

Journal of Engineering, Design and Technology, vol. 21 no. 5
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 3 April 2018

Kentaro Yamamoto and Mary Louise Lennon

Abstract

Purpose

Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys. This paper aims to describe how data fabrication can be understood in the context of the complex processes involved in the collection, handling, submission and analysis of large-scale assessment data. The actors involved in those processes, and their possible motivations for data fabrication, are also elaborated.

Design/methodology/approach

Computer-based assessments produce new types of information that enable us to detect the possibility of data fabrication, and therefore the need for further investigation and analysis. The paper presents three examples that illustrate how data fabrication was identified and documented in the Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA) and discusses the resulting remediation efforts.

Findings

For two countries that participated in the first round of PIAAC, the data showed a subset of interviewers who handled many more cases than others. In Case 1, the average proficiency for respondents in those interviewers’ caseloads was much higher than expected and included many duplicate response patterns. In Case 2, anomalous response patterns were identified. Case 3 presents findings based on data analyses for one PISA country, where results for human-coded responses were shown to be highly inflated compared to past results.

Originality/value

This paper shows how new sources of data, such as timing information collected in computer-based assessments, can be combined with other traditional sources to detect fabrication.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883
