Pipeline: Tabular Classification
Library: Scikit-learn
Language: English
Tags: hierarchical, healthcare, ehr, copd, clinical-risk, tabular, scikit-learn, clustering, unsupervised
How to use stormid/copd-model-e with Scikit-learn:

```python
from huggingface_hub import hf_hub_download
import joblib

# Only load pickle files from sources you trust.
# Read more: https://skops.readthedocs.io/en/stable/persistence.html
model = joblib.load(
    hf_hub_download("stormid/copd-model-e", "sklearn_model.joblib")
)
```
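The loaded object behaves like any fitted scikit-learn estimator. As a minimal sketch of the same joblib round-trip, using a stand-in model trained on synthetic data (the real model's estimator type and feature schema are assumptions here, not taken from the repository):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for the downloaded model: any fitted estimator persists and
# reloads through joblib the same way sklearn_model.joblib would.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(clf, "sklearn_model.joblib")      # persist to disk
model = joblib.load("sklearn_model.joblib")   # reload, as in the snippet above
preds = model.predict(X[:3])                  # use like any fitted estimator
print(preds.shape)  # (3,)
```

The same trust caveat applies: joblib files are pickles, so only load them from sources you trust.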
Processing
This folder contains scripts for processing raw EHR data, along with the mappings required to carry out the initial processing steps.
Before running any scripts, create a directory called 'Model_E_Extracts' within the 'S:/data' directory.
NB: The processing scripts below can be run in any order.
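The setup step above can be sketched in Python; the drive-letter path comes from the instructions, so adjust `data_root` if your data lives elsewhere:

```python
from pathlib import Path

# Output directory for the processing scripts, per the setup step above.
data_root = Path("S:/data")
extracts_dir = data_root / "Model_E_Extracts"

# parents=True also creates missing parent directories;
# exist_ok=True makes the script safe to re-run.
extracts_dir.mkdir(parents=True, exist_ok=True)
print(extracts_dir.is_dir())  # True
```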
Admissions
- process_admissions.py - SMR01 COPD/Resp admissions per patient per year
- process_comorbidities.py - SMR01 comorbidities per patient per year
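As an illustration of the per-patient-per-year shape these scripts produce, here is a hedged pandas sketch; the column names `patient_id` and `admission_date` are assumptions, not the actual SMR01 schema:

```python
import pandas as pd

# Toy stand-in for SMR01 admission records; real column names are assumptions.
admissions = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "admission_date": pd.to_datetime(
        ["2018-03-01", "2018-11-20", "2019-05-02", "2018-07-15"]
    ),
})

# Count admissions per patient per calendar year.
counts = (
    admissions
    .assign(year=admissions["admission_date"].dt.year)
    .groupby(["patient_id", "year"])
    .size()
    .rename("n_admissions")
    .reset_index()
)
print(counts)
# patient 1: 2 admissions in 2018, 1 in 2019; patient 2: 1 in 2018
```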
Demographics
- process_demographics.py - DOB, sex, marital status and SIMD data
Labs
- process_labs.py - lab test values per patient per year, taking the median lab test value from the 2 years prior
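A sketch of the "median from the 2 years prior" step, assuming hypothetical columns `patient_id`, `year`, and `value` (the real schema, test-code handling, and exact window boundaries are not shown in this README):

```python
import pandas as pd

# Toy lab results; real column names are assumptions.
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 1],
    "year":       [2016, 2017, 2017, 2018],
    "value":      [4.0, 6.0, 8.0, 5.0],
})

def median_prior_two_years(df: pd.DataFrame, target_year: int) -> pd.Series:
    """Median lab value per patient over the two years before target_year."""
    window = df[df["year"].between(target_year - 2, target_year - 1)]
    return window.groupby("patient_id")["value"].median()

print(median_prior_two_years(labs, 2018))
# patient 1: median of the 2016-2017 values [4.0, 6.0, 8.0] -> 6.0
```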
Prescribing
- process_prescribing.py - prescriptions per patient per year