Author: Hanjun Lin, Yeh-Liang Hsu, Ming-Shinn Hsu, Chih-Ming Cheng (2014-09-09); recommended: Yeh-Liang Hsu (2014-09-09).
Note: This paper is published in Telemedicine and e-Health, 2014, 20(8): 748-756. doi:10.1089/tmj.2013.0261

Development of a telehealthcare decision support system for patients discharged from hospital

Abstract

Objective: This paper presents the development of a Telehealthcare Decision Support System (TDSS) for patients discharged from hospital, for whom symptom data are important indications of recovery progress. Symptom data are difficult to quantify in the telehealthcare application scenario, because the observations and perceptions of symptoms by the patients themselves are subjective. In the TDSS, both symptom data from patients and clinical histories from the Hospital Information System are collected. Machine learning algorithms are used to build a predictive model that classifies patients according to their symptom data and clinical histories and provides a degree of urgency for the patient to return to the hospital. Subjects and Methods: 1,467 patient cases were collected during a one-year period. Symptom data and clinical histories were preprocessed into 49 parameters for machine learning. The training data were validated manually against the patients' actual clinical histories of returning to the hospital. The performance of predictive models trained by five different machine learning algorithms was evaluated and compared. Results: The Bayesian Network algorithm had the best overall performance among the machine learning algorithms tested in this application scenario and was selected for implementation in the TDSS. On the 1,467 patient cases collected, its precision in 10-fold cross-validation was 79.3%. The 6 most important parameters were also selected from the 49 parameters by feature selection. The performance of correct prediction by the TDSS is comparable to that of the nursing team at the call center. Conclusions: The TDSS provides a degree of urgency for patients to return to the hospital, and thereby assists the telehealthcare nursing team in making such decisions. The performance of the TDSS is expected to improve as more patient cases are collected and input into the TDSS.
The TDSS has been implemented in one of the largest commercialized telehealthcare practices in Taiwan administered by Min-Sheng General Hospital.

Keywords: telehealthcare, decision support system (DSS), machine learning, symptom data, commercial telemedicine, home health monitoring, telehealth

Introduction

Telehealthcare is an important trend in health management and care. Most telehealthcare applications focus on care either for patients with chronic diseases or for elderly patients. Min-Sheng General Hospital in Taiwan has offered a telehealthcare service platform, “Smart Care,” specifically for patients discharged from hospital. To utilize medical resources most effectively, doctors allow patients to be discharged from hospital once those patients have recovered sufficiently that their medical needs can be met by themselves or by a caregiver. However, such patients still need to be monitored remotely, given nursing suggestions and instructions, and, if necessary, told to return to the hospital.

Figure 1 shows the telehealthcare service platform of Smart Care. Patients discharged from the hospital can join the service on the recommendation of their doctors. Patients regularly measure vital signs at home according to the measurement prescription issued by their doctors. They then upload the measurement data and report their current health status and symptoms through a home gateway or an interactive voice response system. The nursing team at the call center, which is composed of professional nurses and doctors, phones patients in order to periodically assess patients' health status and address any patient concerns. The nursing team gives nursing suggestions and instructions to the patient or caregiver by drawing on their professional training and experience and with assistance from information systems.


Fig. 1. The telehealthcare service platform of Smart Care

Such a telehealthcare system for patients discharged from hospital has special needs. The nursing team at the call center must judge whether patients need to return to the hospital for a further checkup, based on the symptoms described by the patients themselves and on the patients' clinical histories. Symptom data are important keys to assessing the recovery progress of patients who were recently discharged from hospital. However, symptom data are difficult to quantify, because each patient case is unique, and the observations and perceptions of symptoms by the patients themselves are subjective.

There currently exist a number of studies on information systems that aid doctors in making clinical decisions. A clinical decision support system (CDSS) links health observations with health knowledge to influence health choices by clinicians, thereby improving health care. A CDSS uses several items of patient data to generate case-specific advice.1

The first well-known CDSS, AAPHelp, was developed in 1972. It was a computer-aided diagnosis system to support clinical assessment and decision-making for Acute Abdominal Pain (A.A.P.) and was based on clinical evidence and best practices from the UK and Europe. It implemented an electronic data collection protocol and prompted clinicians to make a thorough and accurate clinical assessment. It provided definitions of clinical symptoms and signs, access to large databases of information about patients with A.A.P., and a display of real outcomes for patients with a clinical presentation similar to that of the patient being evaluated.2

DXplain and Quick Medical Reference were successful and commercialized systems originating in the 1980s. DXplain was developed in 1987 and used a set of clinical findings (signs, symptoms, laboratory data) to produce a ranked list of diagnoses which might explain (or be associated with) the clinical manifestations. DXplain included 2,200 diseases and 5,000 symptoms in its knowledge base. DXplain provided justification for why each of these diseases should be considered, suggested what further clinical information would be useful to collect for each disease, and listed what clinical manifestations, if any, would be unusual or atypical for each of the specific diseases.3

Quick Medical Reference developed in 1989 was a diagnostic decision-support system with a knowledge base of diseases, diagnoses, findings, disease associations, and lab information. It was designed for three types of purposes: as an electronic textbook, as an intermediate level spreadsheet for the combination and exploration of simple diagnostic concepts, and as an expert consultant program with information from the primary medical literature on almost 700 diseases and more than 5,000 symptoms, signs, and labs.4

As these previous studies show, CDSSs provide clinical decision support by correlating patient symptom data, diseases, and patient outcomes through information technologies (such as expert systems and machine learning). CDSSs have been used in medical and healthcare practice for a number of years, but their application to telehealthcare is rarely found.

This paper presents the development of a Telehealthcare Decision Support System (TDSS) for patients recently discharged from hospital. The TDSS collects symptom data from patients and clinical histories from the Hospital Information System, and uses machine learning algorithms to generate a predictive model that classifies patients and provides a degree of urgency for the patient to return to the hospital. This is probably the most important decision made by the nursing team at the call center, and it has a critical impact on both the medical cost and the recovery progress of the patient in this telehealthcare application scenario.

This paper is organized as follows. Section 2 describes the data collection and the methodology of constructing the TDSS. Section 3 presents three experiments to validate the performance of the TDSS. Finally, Section 4 concludes the paper.

Method

System overview

Figure 2 depicts the flowchart of constructing the proposed system. In the training phase, training data are preprocessed data composed from patients' raw data and manually validated results (the actual clinical histories of the patients returning to the hospital). The predictive model is built by a machine learning algorithm with the training data as input, and can be updated periodically to improve the precision of prediction as more patient cases are added. In the prediction phase, test data are preprocessed data composed from new patients' raw data. Test data are classified by the predictive model to generate prediction results (a degree of urgency for the patient to return to the hospital). Details of each item in the flowchart are described as follows.

Fig. 2. The flowchart of constructing the TDSS

Input data

Patients’ raw data are the combination of symptom data and clinical histories of the patients. Symptom data are observed at home by the patients themselves. Clinical histories are input from the Hospital Information System. Data preprocessing converts the text data of clinical records into numeric options. After data preprocessing, the data are organized into 49 parameters for each patient case. Values for each parameter are simplified into positive integers, and parameters that involve a date are input as the difference in days from the relevant date. These 49 parameters are categorized as follows:

(1)  Status of patient, which includes a survey of medication compliance, sleep conditions and emotions, date of discharge from hospital, etc. (Table 1)

(2)  Symptoms of patient, which includes a survey of symptoms of pain, redness, swelling, and wound fluid, such as frequency of pain, level of pain, temperature to touch of the wounded area, color of wound fluid, etc. (Table 2)

(3)  Observations of wound, which includes locations of wounds, biggest size of wounds, etc. (Table 3)

(4)  Clinical history of surgery, which includes the primary department for diagnosis, surgery method, surgery date (Table 4)
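For illustration, the preprocessing step described above might be sketched as follows. This is a minimal sketch, not the system's actual code: the field names, option tables, and function names are assumptions, but the conventions match the text (text options mapped to small integers, dates converted into day differences).

```python
from datetime import date

# Hypothetical option table: text answers are mapped to positive integers.
SLEEP_OPTIONS = {"normal": 0, "lost sleep": 1, "sleepy": 2}

def days_between(earlier, later):
    """Difference in days between two yyyy/MM/dd date strings."""
    y1, m1, d1 = (int(x) for x in earlier.split("/"))
    y2, m2, d2 = (int(x) for x in later.split("/"))
    return (date(y2, m2, d2) - date(y1, m1, d1)).days

def preprocess(raw):
    """Convert one raw patient record into numeric parameters."""
    return {
        "sleep_condition": SLEEP_OPTIONS[raw["sleep"].lower()],
        "pain": 1 if raw["pain"] else 0,
        # date fields become day differences, as in parameter No. 9
        "days_after_discharge": days_between(raw["discharge_date"],
                                             raw["record_date"]),
    }

record = {"sleep": "Normal", "pain": True,
          "discharge_date": "2012/07/17", "record_date": "2012/07/21"}
print(preprocess(record))  # matches the Table 1 example: 4 days after discharge
```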

Table 1. Parameters for input – status of patient

No. | Item | Value options | Example
1 | Apply the medicine to the wound regularly or not | 0 No, 1 Yes | 1
2 | The frequency of applying the medicine | x times in y days | 1, 1
3 | Take the medicine on time (obey the prescription) | 0 No, 1 Yes | 1
4 | The frequency of taking the medicine | x times in y days | 4, 1
5 | The condition of sleep | 0 Normal; 1 Lost sleep; 2 Sleepy (drowsy) | 0
6 | The condition of emotions | 0 Normal (stable); 1 Depressed; 2 Angry (irritable); 3 Other | 0
7 | Other problems | 0 No, 1 Yes | 0
8 | Date of discharge from hospital | yyyy/MM/dd | 2012/07/17
9 | Days after discharge from hospital | x days | 4

Table 2. Parameters for input – symptoms of patient

No. | Type | Item | Value options | Example
10 | Pain | Pain | 0 No, 1 Yes | 1
11 | Pain | Frequency of pain | 0 all the time; 1 only in the day; 2 only at night; 3 sometimes | 0
12 | Pain | Level of pain | 0 No pain; 1 Little pain; 2 Some pain; 3 Much pain; 4 Very painful; 5 Extremely painful | 1
13 | Pain | Take the painkillers | 0 No, 1 Yes | 1
14 | Red and swollen | Redness in the wound | 0 No, 1 Yes | 0
15 | Red and swollen | Swelling in the wound | 0 No, 1 Yes | 0
16 | Red and swollen | Take the anti-inflammatory | 0 No, 1 Yes | 0
17 | Red and swollen | Circulation color of the wound | 0 Normal skin color; 1 White; 2 Purple; 3 Black | 0
18 | Red and swollen | Temperature to touch of wound | 0 Normal; 1 Feels cold; 2 Feels hot | 0
19 | Wound fluid | Fluid in the wound | 0 No, 1 Yes | 1
20 | Wound fluid | Color of wound fluid | 0 Transparent; 1 White; 2 Light yellow; 3 Yellow; 4 Red; 5 Green; 6 Yellow green; 7 Brown (coffee color) | 0
21 | Wound fluid | Bad smell of wound fluid | 0 No, 1 Yes | 0

Table 3. Parameters for input – observations of wound

No. | Item | Value options | Example
22 | Wound | 0 No, 1 Yes | 1
23 | Wound due to surgery or accident | 0 Accident, 1 Surgery | 1
24 | Location of the wound: head | 0 No, 1 Yes | 0
25 | Location of the wound: neck | 0 No, 1 Yes | 1
26 | Location of the wound: chest | 0 No, 1 Yes | 0
27 | Location of the wound: abdomen | 0 No, 1 Yes | 0
28 | Location of the wound: back | 0 No, 1 Yes | 0
29 | Location of the wound: arm | 0 No, 1 Yes | 0
30 | Location of the wound: hand | 0 No, 1 Yes | 0
31 | Location of the wound: thigh (ham) | 0 No, 1 Yes | 0
32 | Location of the wound: lower leg (crus) | 0 No, 1 Yes | 0
33 | Location of the wound: foot | 0 No, 1 Yes | 0
34 | Number of wounds | 0~999 | 1
35 | Biggest size of wounds | 0 less than 5 cm; 1 6~15 cm; 2 16 cm and above | 0
36 | Stitches removed or not | 0 No, 1 Yes | 1
37 | Wound healed (closed) | 0 No, 1 Yes | 1
38 | Patient’s history with diabetes | 0 No, 1 Yes | 0

Table 4. Parameters for input – clinical history of surgery

No. | Item | Value options | Example
39 | Surgery | 0 No, 1 Yes | 1
40 | Primary department for diagnosis | 1 Division of Surgery; 2 Division of Orthopedics; 3 Division of Gynecology; 4 Division of Obstetrics; 5 Division of Cardiology & Cardiovascular Surgery; 6 Division of Ophthalmology; 7 Division of Otolaryngology; 8 Division of General Medicine; 9 Division of Metabolism & Endocrinology | 1
41 | Surgery method: external fixation | 0 No, 1 Yes | 0
42 | Surgery method: encased in plaster | 0 No, 1 Yes | 0
43 | Surgery method: incision | 0 No, 1 Yes | 0
44 | Surgery method: puncture | 0 No, 1 Yes | 0
45 | Surgery method: drainage | 0 No, 1 Yes | 0
46 | Surgery method: endoscopic examination and operation | 0 No, 1 Yes | 1
47 | Surgery method: suture | 0 No, 1 Yes | 0
48 | Surgery date | yyyy/MM/dd | 2012/06/17
49 | Days after surgery | x days | 34

Prediction result

The prediction result of TDSS is a single output, the urgency degree for the patient to return to the hospital, with five possible options of assigned degrees of urgency (Table 5). In other words, each patient case will be classified into one of these five groups.

Table 5. The prediction result of TDSS

No. | Urgency degree | Indication of the need to return to the hospital
1 | 1 | No need for advanced tracking
2 | 3 | Tracking needed in one week
3 | 5 | Tracking needed in three days
4 | 7 | Suggest that an appointment be made with doctor in three days and tracking needed again in three days
5 | 9 | Suggest that patient return to hospital immediately and tracking needed again in 24 hrs

Manual validation of training data

The training data of patient cases are validated manually against the patients' actual clinical histories of returning to the hospital (Table 6), a process referred to as "retrospective chart review" that provides clinically evidence-based results. Details of how each patient case (instance) is manually validated against his/her actual clinical history of returning to the hospital are described in Table 7.

Table 6. Clinical history of returning to the hospital

No. | Item | Value options | Example
1 | Date of returning to the hospital | yyyy/MM/dd | 2012/07/20
2 | Type of returning to the hospital | 1 Scheduled; 2 Unscheduled; 3 Emergent | 1
3 | Primary department for diagnosis | 1 Division of Surgery; 2 Division of Orthopedics; 3 Division of Gynecology; 4 Division of Obstetrics; 5 Division of Cardiology & Cardiovascular Surgery; 6 Division of Ophthalmology; 7 Division of Otolaryngology; 8 Division of General Medicine; 9 Division of Metabolism & Endocrinology | 1
4 | Progress | 1 Becoming better: progressing favorably, doing well, making favorable progress, relief, remission, palliation, lessen, diminish; 2 Becoming worse: taking an unfavorable course, making unsatisfactory progress, getting worse, aggravated, worsen, unfavorable progress, recurrence, relapse, flare-up, recrudescence | 1
5 | Days after discharge from hospital | x days | 14
6 | Days after last tracking date | x days | 7

Table 7. Logic of manual validation of clinical history of returning to the hospital

No. | Days after discharge from hospital or the last tracking date | Type of returning to the hospital | Progress | Urgency degree | Corresponding indication of returning to the hospital
1 | > 7 days and <= 30 days | Any | Any | 1 | No need for advanced tracking
2 | > 3 days and <= 7 days | Any | Any | 3 | Tracking needed in one week
3 | <= 3 days | Scheduled | Becoming better | 5 | Tracking needed in three days
4 | <= 3 days | Scheduled | Becoming worse | 7 | Suggest that an appointment be made with doctor in three days and tracking needed again in three days
5 | <= 3 days | Unscheduled or emergent | Any | 7 | Suggest that an appointment be made with doctor in three days and tracking needed again in three days
6 | <= 1 day | Any | Any | 9 | Suggest that patient return to hospital immediately and tracking needed again in 24 hrs
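The validation rules of Table 7 can be expressed as a small function. This is an illustrative sketch: the function name and string encodings are assumptions, row 5's day range and urgency degree are inferred from the surrounding rows (they appear to be merged table cells), and the <= 1 day rule (row 6) is assumed to take precedence over the <= 3 days rules, since the paper does not state rule precedence explicitly.

```python
def validate_urgency(days, return_type, progress):
    """Map a return-to-hospital record to an urgency degree (Table 7 sketch).

    days        -- days after discharge or after the last tracking date
    return_type -- "scheduled", "unscheduled", or "emergent"
    progress    -- "better" or "worse"
    """
    if 7 < days <= 30:
        return 1          # row 1: no need for advanced tracking
    if 3 < days <= 7:
        return 3          # row 2: tracking needed in one week
    if days <= 1:
        return 9          # row 6: suggest immediate return, track again in 24 hrs
    # remaining band: 1 < days <= 3
    if return_type in ("unscheduled", "emergent"):
        return 7          # row 5 (day range inferred from merged cells)
    return 5 if progress == "better" else 7   # rows 3 and 4

print(validate_urgency(14, "scheduled", "better"))  # degree 1
print(validate_urgency(2, "scheduled", "worse"))    # degree 7
```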

In one year of clinical practice of the Smart Care telehealthcare service, data for 1,568 patients were collected. This study was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki. Participants gave informed consent prior to their inclusion in the study. Table 8 shows the distribution for the primary departments for diagnosis of the 1,568 patients.

Table 8. Distribution of the primary departments for diagnosis of the 1,568 patients

No. | Primary department for diagnosis | Patients | Percentage
1 | Division of Surgery | 456 | 29.1%
2 | Division of Orthopedics | 430 | 27.4%
3 | Division of General Medicine | 342 | 21.8%
4 | Division of Gynecology | 160 | 10.2%
5 | Division of Obstetrics | 64 | 4.1%
6 | Division of Cardiology & Cardiovascular Surgery | 60 | 3.8%
7 | Division of Otolaryngology | 47 | 3.0%
8 | Division of Ophthalmology | 9 | 0.6%
9 | Division of Metabolism & Endocrinology | 0 | 0%
Total | | 1,568 | 100%

Patient data from only 1,467 patient cases were used in this study. The remaining 101 patient cases were not included due to a lack of clinical histories of returning to the hospital. The distribution of these patients’ validated output values is shown in Table 9. For most of the patients, the indication for a need to return to the hospital was validated as either “Tracking needed in one week” with a degree of urgency of 3 (67.6%) or “No need for advanced tracking” with a degree of urgency of 1 (19.7%). This distribution also shows that the baseline precision of classifying patients is 67.6%, since simply assigning every patient to the largest group would already achieve this precision.

Table 9. Distribution of validated output values of the 1,467 patients

No. | Urgency degree | Indication of returning to the hospital | Patients | Percentage
1 | 1 | No need for advanced tracking | 289 | 19.7%
2 | 3 | Tracking needed in one week | 992 | 67.6%
3 | 5 | Tracking needed in three days | 71 | 4.8%
4 | 7 | Suggest that an appointment be made with doctor in three days and tracking needed again in three days | 100 | 6.8%
5 | 9 | Suggest that patient return to hospital immediately and tracking needed again in 24 hrs | 15 | 1.0%
Total | | | 1,467 | 100%
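The baseline-precision figure quoted above follows directly from the class counts. A minimal computation, using the counts from Table 9:

```python
# Urgency degree -> number of patients (counts from Table 9).
counts = {1: 289, 3: 992, 5: 71, 7: 100, 9: 15}

total = sum(counts.values())
# A trivial classifier that always predicts the largest class ("Tracking
# needed in one week", 992 cases) is correct this fraction of the time.
baseline = max(counts.values()) / total
print(f"{total} cases, baseline precision {baseline:.1%}")
```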

Machine learning algorithms

The core of the TDSS is the predictive model. In this special application scenario for patients discharged from the hospital, the predictive model is built and updated by machine learning algorithms that learn properties from the symptom data and clinical histories within the training data, in order to classify patient cases into five groups with different degrees of urgency for the patient to return to the hospital. Supervised learning, which has known labels of desired outputs (the five degrees of urgency), is used in this study. Five well-known machine learning algorithms, Bayesian Network, Decision Trees, Logistic Regression, Neural Networks and Support Vector Machines (SVM), which have proven their effectiveness for the classification of complicated data by statistics-based methodologies in previous studies, were evaluated in this research. The predictive model with the best performance would be selected and deployed into the TDSS.

Predictive models were generated by these five classic machine learning algorithms. The performance of each predictive model was evaluated in three ways: a standard experiment, 10-fold cross-validation, and leave-one-out cross-validation (LOOCV), the latter two being common techniques for estimating the performance of predictive models. In the standard experiment, the original samples are randomly partitioned into 70% for training, 20% for validation and 10% for testing. In 10-fold cross-validation, the original samples are randomly partitioned into 10 subsets. A single subset is retained as the validation data for testing the model, and the remaining 9 subsets are used as training data. This step is repeated 10 times, so that each subset is used exactly once as the validation data. Finally, the 10 results are averaged to produce a single performance estimate. The advantage of this method is that all observations are used for both training and validation, and each observation is used for validation exactly once. In LOOCV, a single observation from the original sample is used as the validation data, and the remaining observations as the training data. This is repeated so that each observation in the sample is used once as the validation data. LOOCV is especially suitable for sparse datasets, because it trains on as many examples as possible, increasing the precision of the predictive model.
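The k-fold procedure just described can be sketched in a few lines; LOOCV is simply the special case where k equals the number of samples. This is a generic illustration, not the paper's experiment: the classifier here is a stand-in majority-vote model, and the labels are synthetic.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Randomly partition indices 0..n-1 into k disjoint validation subsets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(labels, k):
    """Hold out each fold once, 'train' on the rest, count correct predictions."""
    folds = k_fold_indices(len(labels), k)
    correct = 0
    for fold in folds:
        held_out = set(fold)
        train = [labels[i] for i in range(len(labels)) if i not in held_out]
        majority = max(set(train), key=train.count)   # stand-in "model"
        correct += sum(1 for i in fold if labels[i] == majority)
    return correct / len(labels)   # each sample is validated exactly once

labels = [3] * 70 + [1] * 20 + [5] * 10   # synthetic 100-case label set
print(cross_validate(labels, 10))         # 10-fold cross-validation
print(cross_validate(labels, 100))        # leave-one-out (k == n)
```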

Results

The performance of the five classic machine learning algorithms for this application scenario was evaluated and compared. In Table 10, the 1,467 collected patient cases are separated into a training set (70%), a validation set (20%) and a test set (10%). Each machine learning algorithm was trained on the training set of 1,027 instances, and its performance was evaluated on the test set of 147 instances. Bayesian Network had the best performance in this application scenario. It correctly classified 112 of the 147 instances in the test set (76.2%), which is 8.6% higher than the baseline precision (67.6%).

Table 10. Performance comparison among machine learning algorithms using the training set (70%), validation set (20%) and test set (10%)

Machine learning algorithm | Correctly classified | Incorrectly classified | Performance
Bayesian Network | 112 | 35 | 76.2%
Tree J48 | 111 | 36 | 75.5%
Support Vector Machine (SVM) | 111 | 36 | 75.5%
Logistic | 110 | 37 | 74.8%
Neural Networks | 106 | 41 | 72.1%

Each predictive model generated by a different machine learning algorithm was also trained and validated by 10-fold cross-validation and LOOCV (Table 11). Tree J48 had the best performance in both validations: it correctly classified 1,166 instances (79.5%, 11.9% higher than the baseline precision) in the 10-fold cross-validation and 1,174 instances (80.0%, 12.4% higher than the baseline precision) in the LOOCV. The performance of Bayesian Network was very close as well.

Table 11. Performance comparison among machine learning algorithms using 10-fold cross-validation and leave-one-out cross-validation (LOOCV)

Machine learning algorithm | 10-fold CV: correct / incorrect / performance | LOOCV: correct / incorrect / performance
Tree J48 | 1,166 / 301 / 79.5% | 1,174 / 293 / 80.1%
Bayesian Network | 1,163 / 304 / 79.3% | 1,170 / 297 / 79.8%
Support Vector Machine (SVM) | 1,147 / 320 / 78.2% | 1,147 / 320 / 78.2%
Logistic | 1,142 / 325 / 77.9% | 1,150 / 317 / 78.4%
Neural Networks | 1,045 / 422 / 71.2% | 1,084 / 383 / 73.9%

Note that in the first experiment (Table 10), the predictive models were trained on 1,027 instances. In the second and third experiments (Table 11), the predictive models were trained on 1,321 instances (10-fold cross-validation) and 1,466 instances (LOOCV). The performance of all machine learning algorithms improves when more patient cases (instances) are involved; the best performance is 80.1% with 1,466 instances learned. The performance is expected to continue improving with an increased number of instances and further refinement of the input parameters. In this study, the same 49 input parameters were used for all patients, because enough patient cases had to be obtained for machine learning. Better performance of the predictive models may result if different input parameters are designed for patients from different departments of diagnosis.

On the other hand, the input parameters may contain many redundant or irrelevant features which provide no useful information. If the input parameters can be reduced without apparent loss of precision, the cost of patient data collection and the complexity of classifying patient cases will also be reduced, resulting in a more practical system. Feature selection is a technique for selecting a subset of relevant features from the input parameters. Feature selection also helps identify the most important input parameters in the model.
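In the spirit of this idea, wrapper-style forward selection (related to the best-first search used later in this study) can be sketched as follows: repeatedly add the feature that most improves a scoring function, stopping when no candidate helps. The scorer here is a toy "table lookup" classifier scored on the training data itself; the actual study scored candidate subsets with the learned predictive models, and the feature names below are hypothetical.

```python
from collections import Counter, defaultdict

def table_score(rows, labels, feats):
    """Training accuracy of a majority-label lookup over the chosen features."""
    buckets = defaultdict(Counter)
    for row, y in zip(rows, labels):
        buckets[tuple(row[f] for f in feats)][y] += 1
    return sum(max(c.values()) for c in buckets.values()) / len(labels)

def forward_select(rows, labels, all_feats):
    """Greedy forward selection: add features while the score keeps improving."""
    selected = []
    best = table_score(rows, labels, selected)  # empty set = majority class
    improved = True
    while improved:
        improved = False
        best_f = None
        for f in all_feats:
            if f in selected:
                continue
            s = table_score(rows, labels, selected + [f])
            if s > best:
                best, best_f, improved = s, f, True
        if improved:
            selected.append(best_f)
    return selected, best

# Toy data: "pain" perfectly determines the label, "noise" is irrelevant.
rows = [{"pain": 1, "noise": 0}, {"pain": 1, "noise": 1},
        {"pain": 0, "noise": 0}, {"pain": 0, "noise": 1}] * 2
labels = [1, 1, 0, 0] * 2
selected, acc = forward_select(rows, labels, ["noise", "pain"])
print(selected, acc)  # only "pain" is kept
```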

After the trainings were completed, this research used the best-first search technique for feature selection to find which features (parameters) are important for prediction. As shown in Table 12, the 6 most important features for prediction were suggested. Note that three of these features are related to the presence, frequency and level of pain reported by the patients. As shown in Table 13, the performance of predictive models trained with only these 6 important features was close to the performance of predictive models trained with the full 49 features. Again, Bayesian Network had the best all-around performance and was finally selected as the machine learning algorithm of the TDSS.

Table 12. The result of feature selection

Order | No. | Type | Item | Value options
1 | 11 | Pain | Frequency of pain | 0 all the time; 1 only in the day; 2 only at night; 3 sometimes
2 | 37 | | Wound healed (closed) | 0 No, 1 Yes
3 | 12 | Pain | Level of pain | 0 No pain; 1 Little pain; 2 Some pain; 3 Much pain; 4 Very painful; 5 Extremely painful
4 | 19 | Wound fluid | Fluid in the wound | 0 No, 1 Yes
5 | 41 | Surgery method | external fixation | 0 No, 1 Yes
6 | 10 | Pain | Is pain or not | 0 No, 1 Yes

Table 13. Performance of predictive models trained with 6 features

Machine learning algorithm | Training set (70%), validation set (20%) and test set (10%) | 10-fold cross-validation | Leave-one-out cross-validation (LOOCV)
Neural Networks | 76.9% | 79.4% | 79.5%
Bayesian Network | 76.2% | 79.3% | 79.8%
Logistic | 76.2% | 78.9% | 78.5%
Tree J48 | 75.5% | 79.7% | 79.3%
Support Vector Machine (SVM) | 75.5% | 78.5% | 78.5%

During the one-year period, the nursing team at the call center was also requested to make recommendations (predictions) of degrees of urgency for these 1,467 patients to return to the hospital, based on their symptoms and clinical histories. Table 14 shows the result of this manual classification as a confusion matrix. A confusion matrix, also known as a contingency table or an error matrix, is a specific table layout that visualizes the performance of a classification result. Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e., commonly mislabeling one as another). The numbers on the diagonal are the correctly classified patient cases (instances). The nurses correctly classified 1,173 instances, a precision of 79.96%, which is slightly lower than that of the TDSS (80.08%). A detailed comparison is shown in Table 15.

Table 14. The confusion matrix of the nursing team (clinical personnel)

Real class \ Classified as | 1 | 3 | 5 | 7 | 9
1 No need for advanced tracking | 184 | 95 | 10 | 0 | 0
3 Tracking needed in one week | 51 | 925 | 15 | 1 | 0
5 Tracking needed in three days | 4 | 16 | 36 | 14 | 1
7 Suggest that an appointment be made with doctor in three days and tracking needed again in three days | 5 | 43 | 27 | 21 | 4
9 Suggest that patient return to hospital immediately and tracking needed again in 24 hrs | 2 | 3 | 1 | 2 | 7

Correctly classified 1,173 instances of 1,467 patients (79.96%)
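The confusion-matrix bookkeeping described above, with rows as actual classes, columns as predicted classes, and the diagonal holding the correct cases, can be sketched as follows. The small label vectors are illustrative only, not the study's data.

```python
def confusion_matrix(actual, predicted, classes):
    """Build a confusion matrix: rows = actual class, columns = predicted class."""
    index = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for a, p in zip(actual, predicted):
        m[index[a]][index[p]] += 1
    return m

actual    = [1, 1, 3, 3, 3, 5]   # toy urgency-degree labels
predicted = [1, 3, 3, 3, 5, 5]
m = confusion_matrix(actual, predicted, [1, 3, 5])
# Overall precision = sum of the diagonal / total instances.
accuracy = sum(m[i][i] for i in range(3)) / len(actual)
print(m, accuracy)
```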

Table 15. Performance comparison of TDSS and nursing team

Performance evaluation item | Calculation | Nurses | TDSS
Sensitivity | TP / (TP + FN) = 1 - FNR | 0.296 | 0.209 (-8.7%)
Specificity | TN / (TN + FP) = 1 - FPR | 0.988 | 0.990 (+0.2%)
PPV | TP / (TP + FP) | 0.680 | 0.632 (-4.8%)
NPV | TN / (TN + FN) | 0.923 | 0.936 (+1.3%)
FPR | FP / (FP + TN) = 1 - Specificity | 0.012 | 0.010 (-0.2%)
FNR | FN / (FN + TP) = 1 - Sensitivity | 0.704 | 0.791 (+8.7%)

TP, true positive; TN, true negative; FP, false positive; FN, false negative; PPV, positive predictive value; NPV, negative predictive value; FPR, false-positive rate; FNR, false-negative rate.
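The formulas in Table 15 can be computed directly from the four basic counts. The counts below are made up for illustration (the paper does not report the underlying TP/TN/FP/FN values); the complementary identities in the last column of Table 15 hold by construction.

```python
def binary_metrics(tp, tn, fp, fn):
    """Standard binary classification metrics from Table 15's formulas."""
    return {
        "sensitivity": tp / (tp + fn),   # = 1 - FNR
        "specificity": tn / (tn + fp),   # = 1 - FPR
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "fpr": fp / (fp + tn),           # false-positive rate
        "fnr": fn / (fn + tp),           # false-negative rate
    }

# Hypothetical counts, chosen only to exercise the formulas.
m = binary_metrics(tp=30, tn=900, fp=10, fn=70)
print(m)
```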

Conclusions and Discussion

This paper presents the development of the Telehealthcare Decision Support System (TDSS) for patients recently discharged from hospital. Symptom data from patients and clinical histories from the Hospital Information System were collected, and a predictive model was built to provide a degree of urgency for the patient to return to the hospital. The performance of predictive models generated by five classic machine learning algorithms was evaluated, and finally Bayesian Network was selected for implementation in the TDSS. Based on the 1,467 patient cases collected over a one-year period, the performance of correct prediction by the TDSS is comparable to that of the nursing team at the call center.

The predictive model trained with the 6 most important features has satisfactory performance; therefore, the input to the TDSS can be reduced to these 6 features only. The performance of the TDSS is expected to improve continuously as more patient cases are collected and input into the TDSS.

The TDSS has been implemented in one of the largest commercialized telehealthcare practices in Taiwan administered by Min-Sheng General Hospital, and is currently assisting the nursing team at the call center to make judgments on the need for patients to return to the hospital for further checkup.

Acknowledgments

This research was sponsored by the Department of Industrial Technology, Ministry of Economic Affairs, Taiwan; the National Science Council, Taiwan; and the Ministry of Education, Taiwan. This research was also supported by Smart Care Inc., Taiwan. This support is gratefully acknowledged.

Disclosure Statement

No competing financial interests exist. The funding source did not influence study design, data collection, data analysis, interpretation, or presentation.

References

1.        Wyatt J. A method for developing medical decision-aids applied to ACORN, a chest pain advisor. Doctorate of Medicine thesis, Oxford University, 1991.

2.        de Dombal FT, Leaper DJ, Staniland JR, et al. Computer-aided diagnosis of acute abdominal pain. British Medical Journal 1972;2(5804):9-13.

3.        Barnett GO, Cimino JJ, Hupp JA, et al. DXplain. An evolving diagnostic decision-support system. The Journal of the American Medical Association 1987;258(1):67-74.

4.        Miller RA, Masarie FE Jr. Use of the Quick Medical Reference (QMR) program as a tool for medical education. Methods of Information in Medicine. 1989;28(4):340-345.