Search results

1 – 2 of 2

Abstract

Purpose

Although medical leadership and management (MLM) is increasingly being recognised as important for improving healthcare outcomes, little is known about how medical students in the UK are currently trained in MLM skills and behaviours. This paper aims to discuss these issues.

Design/methodology/approach

This qualitative study used validated structured interviews with expert faculty members from medical schools across the UK to ascertain MLM framework integration, teaching methods employed, evaluation methods and barriers to improvement.

Findings

Data were collected from 25 of the 33 UK medical schools (a 76 per cent response rate), with 23 of the 25 reporting that MLM content is included in their curriculum. More medical schools assessed MLM competencies on admission than at any other point in the curriculum. Only 12 schools had evaluated their MLM teaching at the time of data collection. The majority of medical schools reported barriers, including overfilled curricula and staff reluctance to teach the subject. Whilst 88 per cent of schools planned to increase MLM content over the next two years, there was a lack of consensus on the teaching content and methods to be used.

Research limitations/implications

There is widespread inclusion of MLM in UK medical schools’ curricula despite the barriers identified. This study found substantial heterogeneity in MLM teaching and assessment methods, much of which does not match students’ preferred modes of delivery. Examples of national undergraduate MLM teaching exist worldwide, and lessons can be drawn from them.

Originality/value

This is the first national evaluation of MLM in undergraduate medical school curricula in the UK. It highlights the continuing challenges of delivering MLM content, despite the existence of numerous frameworks and international examples of successful delivery.

Details

Journal of Health Organization and Management, vol. 30 no. 7
Type: Research Article
ISSN: 1477-7266

Article
Publication date: 21 December 2021

Laouni Djafri

Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other framework. DDPML can also be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach

In the age of Big Data, companies want to benefit from the large amounts of data they hold. These data can help them understand their internal and external environments and anticipate associated phenomena, because data turn into knowledge that can later be used for prediction; that knowledge becomes a great asset in companies’ hands, and extracting it is precisely the objective of data mining. With data and knowledge now produced at an ever faster pace, this has become Big Data mining. The proposed work therefore aims to address the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The question raised is how machine learning algorithms can be made to work in a distributed and parallel way at the same time without losing classification accuracy. To answer it, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm that in turn relies on a random sampling technique; this architecture is designed to handle Big Data processing coherently and efficiently with the sampling strategy proposed in this work, and it also allows the classification results obtained from the representative learning base (RLB) to be verified. In the second part, the representative learning base is extracted by sampling at two levels using stratified random sampling; the same method is used to extract the shared learning base (SLB) and the partial learning bases for the first and second levels (PLBL1 and PLBL2). The experimental results show the efficiency of the proposed solution, with no significant loss in classification quality. In practical terms, the DDPML system is dedicated to Big Data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
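
The abstract does not give implementation details, but the two-level stratified sampling it describes can be illustrated with a minimal, hypothetical sketch in Python. The function name stratified_sample, the toy records and the 10 per cent and 30 per cent sampling fractions below are assumptions for illustration only; they are not taken from the paper, and the authors’ actual Map-Reduce-driven, distributed DDPML implementation is not reproduced here.

import random
from collections import defaultdict

def stratified_sample(records, label_of, fraction, seed=None):
    # Proportional stratified random sampling: group records by class
    # (stratum) and keep roughly `fraction` of each group, so the class
    # distribution of the sample mirrors that of the full data set.
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[label_of(rec)].append(rec)
    sample = []
    for group in strata.values():
        k = max(1, round(fraction * len(group)))
        sample.extend(rng.sample(group, k))
    return sample

# Hypothetical two-level extraction: a first-level partial base (PLBL1)
# is drawn from the full data set, and a smaller second-level base (PLBL2)
# is drawn from PLBL1. The 10% and 30% fractions are illustrative only.
full_data = [(x, x % 3) for x in range(100_000)]   # toy (feature, class) records
plbl1 = stratified_sample(full_data, lambda r: r[1], 0.10, seed=1)
plbl2 = stratified_sample(plbl1, lambda r: r[1], 0.30, seed=2)
print(len(plbl1), len(plbl2))

The point being illustrated is that sampling proportionally within each class keeps the class distribution of the reduced learning base close to that of the full data set, which is what allows a much smaller base to stand in for the whole when training classifiers in a distributed setting.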

Findings

The authors obtained very satisfactory classification results.

Originality/value

The DDPML system is specifically designed to handle Big Data mining classification smoothly.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288
