How to use JLB-JLB/Model_folder with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="JLB-JLB/Model_folder")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load the model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("JLB-JLB/Model_folder")
model = AutoModelForImageClassification.from_pretrained("JLB-JLB/Model_folder")
```

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the beans dataset. It achieves the following results on the evaluation set:

- Validation Loss: 0.0171
- Matthews Correlation: 0.9888
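Building on the "load directly" snippet, a full classification pass might look like the sketch below. It reuses the demo image URL from the pipeline example; the human-readable class names are taken from the model's own `id2label` mapping rather than hard-coded, since the exact label set of this checkpoint is not stated here.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("JLB-JLB/Model_folder")
model = AutoModelForImageClassification.from_pretrained("JLB-JLB/Model_folder")

# Fetch an example image (the demo image from the pipeline snippet above)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Preprocess to pixel tensors and run a forward pass without gradients
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the argmax over the logits
pred_id = logits.argmax(-1).item()
label = model.config.id2label[pred_id]
print(label)
```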
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|---|---|---|---|---|
| 0.0488 | 0.91 | 30 | 0.1366 | 0.9449 |
| 0.0077 | 1.82 | 60 | 0.0508 | 0.9775 |
| 0.0057 | 2.73 | 90 | 0.0366 | 0.9888 |
| 0.0042 | 3.64 | 120 | 0.0171 | 0.9888 |
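The table reports the Matthews correlation coefficient (MCC), which summarizes a whole confusion matrix in a single score between -1 and 1. As a reference for how the multiclass value is computed (this is the standard Gorodkin generalization, not code from this repository), a minimal sketch:

```python
import math

def matthews_corrcoef(confusion):
    """Multiclass MCC from a confusion matrix where rows are true
    classes and columns are predicted classes."""
    n = len(confusion)
    s = sum(sum(row) for row in confusion)      # total number of samples
    c = sum(confusion[k][k] for k in range(n))  # correctly predicted samples
    t = [sum(confusion[k]) for k in range(n)]   # true count per class
    p = [sum(confusion[i][k] for i in range(n)) for k in range(n)]  # predicted count per class
    num = c * s - sum(tk * pk for tk, pk in zip(t, p))
    den = math.sqrt((s * s - sum(pk * pk for pk in p)) *
                    (s * s - sum(tk * tk for tk in t)))
    return num / den if den else 0.0

# Perfect predictions over three classes give MCC = 1.0
print(matthews_corrcoef([[10, 0, 0], [0, 12, 0], [0, 0, 8]]))  # 1.0
```

A score of 0.9888, as in the final epoch above, indicates near-perfect agreement between predictions and labels on the evaluation set.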
Base model: google/vit-base-patch16-224-in21k
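The base model's name encodes its configuration: a ViT-Base encoder that splits 224x224 input images into 16x16 patches, pretrained on ImageNet-21k. A quick arithmetic check of the resulting transformer sequence length:

```python
image_size = 224  # input resolution, from "224" in the model name
patch_size = 16   # patch side length, from "patch16"

# Each image becomes a grid of non-overlapping patches
patches_per_side = image_size // patch_size        # 14
num_patches = patches_per_side ** 2                # 14 * 14 = 196

# ViT prepends a [CLS] token whose final state feeds the classifier head
seq_len = num_patches + 1                          # 197
print(num_patches, seq_len)  # 196 197
```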