
Feat: LLM compression efficiency metrics #437

@DanielusG

Description


I was thinking: what if we logged every file the model reads or greps, and then, after a compression, observed when the model re-reads one of those files? A re-read would indicate that the model actually needed that file but pruned it by mistake, and is now forced to fetch it again. This would help determine which model is wiser in its use of compression, i.e. which one compresses without shooting itself in the foot.
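A minimal sketch of what such a tracker could look like. All names here (`RereadTracker`, `log_read`, `log_compression`) are hypothetical, not an existing API in this project, and a re-read is only treated as evidence of a bad prune, not proof — the model might legitimately need a file twice:

```python
# Hypothetical sketch: class and method names are illustrative, not project API.
class RereadTracker:
    def __init__(self):
        self._pre_compression_reads = set()  # files read before the last compression
        self._window_reads = set()           # files read since the last compression
        self.forced_rereads = []             # re-reads that suggest a bad prune

    def log_read(self, path: str) -> None:
        """Record a read or grep of `path` by the model."""
        if path in self._pre_compression_reads:
            # The model already saw this file, compression may have pruned it,
            # and now it has to read it again: flag it as a suspected mistake.
            self.forced_rereads.append(path)
        self._window_reads.add(path)

    def log_compression(self) -> None:
        """Mark a compression event: everything read so far is at risk of pruning."""
        self._pre_compression_reads |= self._window_reads
        self._window_reads = set()

    def reread_rate(self) -> float:
        """Share of pre-compression files the model was forced to re-read."""
        if not self._pre_compression_reads:
            return 0.0
        return len(set(self.forced_rereads)) / len(self._pre_compression_reads)
```

Usage: read `a.py` and `b.py`, compress, then read `a.py` again — the tracker flags `a.py` as a forced re-read and reports a re-read rate of 0.5. Comparing that rate across models (or compression strategies) over the same sessions would give the "who shoots itself in the foot less" signal described above.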
