Text Classification
Transformers
Safetensors
English
roberta
code
algorithms
competitive-programming
multi-label-classification
codebert
text-embeddings-inference
Instructions to use Ahmedjr/codebert-algorithm-tagger with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Ahmedjr/codebert-algorithm-tagger with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Ahmedjr/codebert-algorithm-tagger")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Ahmedjr/codebert-algorithm-tagger")
model = AutoModelForSequenceClassification.from_pretrained("Ahmedjr/codebert-algorithm-tagger")
```

- Notebooks
- Google Colab
- Kaggle
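Since the model is tagged for multi-label classification, each algorithm tag is scored independently: a sigmoid is applied per label and every label above a threshold is kept, rather than picking a single top class. A minimal sketch of that post-processing, using hypothetical logits and label names in place of a real forward pass through the model:

```python
import math

def sigmoid(x):
    # Map a raw logit to an independent per-label probability
    return 1 / (1 + math.exp(-x))

# Hypothetical values for illustration; in practice, logits come from
# model(**tokenizer(code_snippet, return_tensors="pt")).logits and the
# label names from model.config.id2label
logits = [2.1, -1.3, 0.4]
labels = ["dp", "graphs", "greedy"]

# Keep every label whose probability clears the threshold
probs = [sigmoid(x) for x in logits]
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)  # ["dp", "greedy"]
```

With the high-level pipeline, passing `top_k=None` returns scores for all labels so the same thresholding can be applied to its output.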