
realize-lab/In-app-misinfo


In-app interventions and LLM-generated misinformation

Overview

This project studies how effective various in-app interventions are at improving people's ability to detect health misinformation generated by large language models (LLMs).

Reproduce the results

To reproduce the results, first install R and Python.

  1. Run results_final.r in R to compute the odds ratios and confidence intervals.
  2. Install the required Python packages listed in requirements.txt with pip install -r requirements.txt.
  3. Run plots_final.ipynb in Jupyter Notebook to generate the plots.

Check the figures folder for the generated plots.
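The steps above can be sketched as a single shell session. This is a minimal, untested sketch: it assumes Rscript, pip, and jupyter are on your PATH and that you are in the repository root; only the file names come from this README.

```shell
# Step 1: compute odds ratios and confidence intervals in R
Rscript results_final.r

# Step 2: install the Python dependencies
pip install -r requirements.txt

# Step 3: execute the plotting notebook headlessly
# (alternatively, open plots_final.ipynb interactively in Jupyter Notebook)
jupyter nbconvert --to notebook --execute plots_final.ipynb

# The generated plots should now be in the figures folder
ls figures
```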
