plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and text descriptions. It is a fine-tuned version of the original CLIP model.
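Like CLIP, PLIP embeds images and captions into a shared space and scores image-text pairs by cosine similarity of their embeddings. The exact API depends on the release, but the scoring step can be sketched with placeholder embedding vectors (all names and numbers below are illustrative, not taken from the plip codebase):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: one image embedding and two candidate caption embeddings.
image_emb = np.array([0.9, 0.1, 0.0])
text_emb_tumor = np.array([0.8, 0.2, 0.1])   # e.g. "an H&E image of tumor tissue"
text_emb_normal = np.array([0.1, 0.9, 0.3])  # e.g. "an H&E image of normal tissue"

# The caption with the higher similarity is the zero-shot prediction.
scores = {
    "tumor": cosine_similarity(image_emb, text_emb_tumor),
    "normal": cosine_similarity(image_emb, text_emb_normal),
}
print(max(scores, key=scores.get))  # prints "tumor"
```

In practice the embeddings come from PLIP's image and text encoders rather than hand-written vectors; the comparison step is the same.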
How to download and set up plip
Open a terminal and run the command:
git clone https://github.com/PathologyFoundation/plip.git
git clone creates a local copy of the plip repository.
You pass git clone a repository URL; it supports several network protocols and their corresponding URL formats.
Alternatively, you can download plip as a zip archive: https://github.com/PathologyFoundation/plip/archive/master.zip
Or clone plip over SSH:
git@github.com:PathologyFoundation/plip.git
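The HTTPS and SSH forms above follow GitHub's standard clone-URL patterns. A small helper (hypothetical, for illustration only, not part of plip) that builds both for any repository:

```python
def github_clone_urls(owner: str, repo: str) -> dict:
    """Return the standard HTTPS and SSH clone URLs for a GitHub repository."""
    return {
        "https": f"https://github.com/{owner}/{repo}.git",
        "ssh": f"git@github.com:{owner}/{repo}.git",
    }

urls = github_clone_urls("PathologyFoundation", "plip")
print(urls["ssh"])  # prints git@github.com:PathologyFoundation/plip.git
```

Both URLs point at the same repository; SSH is convenient if you have an SSH key registered with GitHub, while HTTPS works without any setup.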
If you have problems with plip
You can open an issue on the plip issue tracker here: https://github.com/PathologyFoundation/plip/issues
Similar to plip repositories
Here you can see plip alternatives and analogs
deeplearning4j, machine-learning-for-software-engineers, incubator-mxnet, spaCy, cheatsheets-ai, gun, php-ml, TensorLayer, awesome-artificial-intelligence, AlgoWiki, papers-I-read, kong, EmojiIntelligence, PyGame-Learning-Environment, deep-trading-agent, caffe2, AirSim, pipeline, diffbot-php-client, mycroft-core, iOS_ML, warriorjs, nd4j, incubator-kie-optaplanner, ML-for-High-Schoolers, Dragonfire, auto_ml, gophernotes, deeplearning4j-examples, DeepPavlov