PathologyFoundation/plip

Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
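Since PLIP inherits CLIP's contrastive design, image-text matching reduces to comparing L2-normalized embeddings by cosine similarity. The sketch below illustrates that scoring step with random stand-in vectors; in practice the embeddings would come from PLIP's vision and text encoders, which are not loaded here.

```python
import numpy as np

# CLIP-style zero-shot retrieval sketch. The embeddings are random
# placeholders standing in for PLIP encoder outputs (hypothetical values).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)        # one pathology image embedding
text_embs = rng.normal(size=(3, 512))   # embeddings of 3 candidate captions

def l2_normalize(x, axis=-1):
    """Scale vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine similarity between the image and each candidate text description.
sims = l2_normalize(text_embs) @ l2_normalize(image_emb)
best = int(np.argmax(sims))  # index of the best-matching description
print(best, sims.shape)
```

The same dot-product-of-normalized-embeddings scoring is what lets a CLIP-derived model rank arbitrary text labels against a pathology image without task-specific training.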

308 Stars · 30 Forks · 308 Watchers
Language: Python
Cost to Build: $28.0K
Market Value: $71.0K

Growth over time (stars, forks, watchers): 1 data point, 2025-03-04.

How to clone plip

Clone via HTTPS

git clone https://github.com/PathologyFoundation/plip.git

Clone via SSH

git clone git@github.com:PathologyFoundation/plip.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the plip GitHub issue tracker:

Open GitHub Issues